- 10 Sep, 2009 21 commits
-
-
Joe Eykholt authored
The rport and discovery modules deal with remote ports before fc_remote_port_add() can be done, because the full set of rport identifiers is not known at early stages. In preparation for splitting the fc_rport/fc_rport_priv allocation, make fc_rport_priv the primary interface for the remote port and discovery engines. The FCP / SCSI layers still deal with fc_rport and fc_rport_libfc_priv, however. Signed-off-by: Joe Eykholt <jeykholt@cisco.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Joe Eykholt authored
These macros introduce extra undesirable semicolons that keep them from being used in expressions, and they don't protect against being passed an expression. Add parens and remove the semicolons. Signed-off-by: Joe Eykholt <jeykholt@cisco.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
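For illustration only, here is a minimal, self-contained sketch of the kind of macro hygiene the patch describes: parenthesize the argument and the expansion, and drop the trailing semicolon so the macro can appear inside an expression. The macro and struct names below are made up, not libfc's:

#include <stdio.h>

struct port { unsigned int ids; };

/* Before (broken): no parens around the argument or the expansion,
 * and a trailing semicolon that forbids use in expressions:
 *
 *   #define PORT_ID(p) p->ids & 0xffffff;
 */

/* After: fully parenthesized, no trailing semicolon. */
#define PORT_ID(p) (((p)->ids) & 0xffffff)

int main(void)
{
	struct port prt = { .ids = 0x12abcdef };

	/* Now usable inside a larger expression. */
	if (PORT_ID(&prt) != 0)
		printf("port id 0x%x\n", PORT_ID(&prt));
	return 0;
}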
-
Joe Eykholt authored
The interface for lport->tt.rport_create() takes a fc_disc_port arg, which is unnatural for most calls. The only reason for this was to avoid passing in the local port as an argument, but it otherwise added complexity. Simplify by just using lport and fc_rport_identifiers. Signed-off-by: Joe Eykholt <jeykholt@cisco.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Joe Eykholt authored
While the I/O and LLD interfaces use fc_rport_libfc_priv, the disc and rport interfaces will use fc_rport_priv, which will be separately allocated. Change the disc and rport usage of fc_rport_libfc_priv to fc_rport_priv. Use #define temporarily to make both names equivalent until a subsequent patch splits them. Signed-off-by: Joe Eykholt <jeykholt@cisco.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
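As a rough sketch of the transitional aliasing mentioned above (the direction of the alias and its exact location in the headers are assumptions on my part):

/* Temporary: keep both names valid until the allocation is actually
 * split in a later patch; FCP/SCSI code keeps using fc_rport_libfc_priv.
 */
#define fc_rport_libfc_priv fc_rport_priv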
-
Chris Leech authored
This just cuts down on the number of locks we're dealing with, and eliminates the need to take another lock in the netdev notifier. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Chris Leech authored
Fixes reference counting on fcoe_instance and net_device, and adds NETDEV_UNREGISTER notifier handling so that you can unload network drivers. FCoE no longer increments the module use count for the network driver. On a NETDEV_UNREGISTER event, destroying the FCoE instance is deferred to a workqueue context to avoid RTNL deadlocks. Based in part on an earlier patch from John Fastabend. John's patch description: Currently, the netdev module ref count is not decremented with module_put() when the module is unloaded while fcoe instances are present. To fix this, the reference count on the netdev module is removed completely and handling for NETDEV_UNREGISTER events is added to the netdev event notifier. This allows fcoe to remove devices cleanly when the netdev module is unloaded, so we no longer need to hold a reference count for the netdev module. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
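A minimal sketch of the deferral pattern described here: handle NETDEV_UNREGISTER in the netdev notifier and push the actual teardown to a workqueue so it never runs under the RTNL. The fcoe_instance fields and the fcoe_find_instance()/fcoe_destroy_work() helpers are placeholders, not the driver's real symbols:

#include <linux/netdevice.h>
#include <linux/notifier.h>
#include <linux/workqueue.h>

struct fcoe_instance {				/* placeholder per-netdev state */
	struct net_device *netdev;
	struct work_struct destroy_work;	/* INIT_WORK()'d at create time */
};

static struct fcoe_instance *fcoe_find_instance(struct net_device *netdev);

static void fcoe_destroy_work(struct work_struct *work)
{
	struct fcoe_instance *fcoe =
		container_of(work, struct fcoe_instance, destroy_work);

	/* Tear down outside of notifier/RTNL context: remove the SCSI
	 * host, drop the net_device reference, free fcoe.
	 */
}

static int fcoe_device_notification(struct notifier_block *nb,
				    unsigned long event, void *ptr)
{
	struct net_device *netdev = ptr;	/* 2009-era notifiers pass the netdev */
	struct fcoe_instance *fcoe = fcoe_find_instance(netdev);

	if (!fcoe)
		return NOTIFY_DONE;

	if (event == NETDEV_UNREGISTER)
		schedule_work(&fcoe->destroy_work);	/* defer, avoid RTNL deadlock */

	return NOTIFY_OK;
}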
-
Chris Leech authored
We only want the FCoE create and destroy routines to deal with top level N_Ports, the VN_Ports are tracked on the vport list (see scsi_transport_fc). Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Chris Leech authored
Rather than rely on the hostlist_lock to be held while creating exchange managers, serialize fcoe instance creation and destruction with a mutex. This will allow the hostlist addition to be moved out of fcoe_if_create(), which will simplify NPIV support. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
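A sketch of the serialization, assuming a single driver-wide mutex; the fcoe_config_mutex name and the helper bodies below are assumptions:

#include <linux/mutex.h>
#include <linux/netdevice.h>

static DEFINE_MUTEX(fcoe_config_mutex);		/* serializes create and destroy */

static int fcoe_create_sketch(struct net_device *netdev)
{
	int rc = 0;

	mutex_lock(&fcoe_config_mutex);
	/* allocate the fcoe interface, set up exchange managers,
	 * create the lport, add it to the host list ...
	 */
	mutex_unlock(&fcoe_config_mutex);
	return rc;
}

static void fcoe_destroy_sketch(struct net_device *netdev)
{
	mutex_lock(&fcoe_config_mutex);
	/* ... find and tear down the matching instance ... */
	mutex_unlock(&fcoe_config_mutex);
}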
-
Chris Leech authored
fcoe_netdev_config() is called during initialization of a libfc instance. Much of what was there only needs to be done once for each net_device. The same goes for the corresponding cleanup. The FIP controller initialization is moved to interface creation time; otherwise it would keep getting re-initialized for every VN_Port once NPIV is enabled. fcoe_if_destroy() has some reordering to deal with the changes. Receives are not stopped until after fcoe_interface_put() is called, but transmits must be stopped before it. So care is taken to stop libfc transmits and the transmit backlog timer first, then call fcoe_interface_put(), which will stop receives and clean up the FIP controller; only then can the receive queues be cleaned and the port freed. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Chris Leech authored
Up to this point the fcoe_instance structure was simply kzalloc/kfreed. This patch introduces create and destroy functions as well as kref based reference counting. The create function will grow as the initialization code is moved there. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
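A minimal sketch of the create/destroy plus kref pattern being introduced; I use the fcoe_interface name from the later patches in this series, and the fields shown are an illustrative subset:

#include <linux/kref.h>
#include <linux/slab.h>
#include <linux/netdevice.h>

struct fcoe_interface {
	struct net_device *netdev;
	struct kref kref;
};

static void fcoe_interface_release(struct kref *kref)
{
	struct fcoe_interface *fcoe =
		container_of(kref, struct fcoe_interface, kref);

	/* per-netdev teardown will accumulate here */
	kfree(fcoe);
}

static struct fcoe_interface *fcoe_interface_create(struct net_device *netdev)
{
	struct fcoe_interface *fcoe = kzalloc(sizeof(*fcoe), GFP_KERNEL);

	if (!fcoe)
		return NULL;
	kref_init(&fcoe->kref);		/* reference count starts at 1 */
	fcoe->netdev = netdev;
	/* per-netdev initialization will accumulate here */
	return fcoe;
}

static inline void fcoe_interface_get(struct fcoe_interface *fcoe)
{
	kref_get(&fcoe->kref);
}

static inline void fcoe_interface_put(struct fcoe_interface *fcoe)
{
	kref_put(&fcoe->kref, fcoe_interface_release);
}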
-
Chris Leech authored
The priv pointer is no longer needed, and once NPIV is enabled fcoe_interface:fc_lport becomes a one-to-many relationship. Remove the single pointer. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Chris Leech authored
The offload EM pointer is only used when setting up a new libfc instance, but as it's designed to be shared among NPIV VN_Ports it should be tracked in fcoe_interface. With the host-list changed to track fcoe_interfaces as well, this is needed before we can remove the priv pointer from that structure (which is only there to help in the transition, and stops making sense once NPIV is enabled). Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Chris Leech authored
There is only one FIP state per net_device, so the FIP controller needs to be moved from the per-SCSI-host fcoe_port to the per-net_device fcoe_interface structure. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Chris Leech authored
The packet handlers need to be tracked in fcoe_interface so there is only one set per net_device. When NPIV is enabled there will be multiple SCSI hosts and multiple fcoe_port structures on a single net_device. The packet handlers match by ethertype and netdev. If the same handler gets registered on a single netdev multiple times, the receive function will be called multiple times for each frame. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
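For reference, a sketch of what one-set-of-handlers-per-net_device looks like with the kernel's packet_type API; matching on both ethertype and netdev is what prevents the duplicate deliveries described above. The struct layout and function bodies are placeholders:

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/if_ether.h>

struct fcoe_interface {				/* illustrative subset */
	struct packet_type fcoe_packet_type;
};

static int fcoe_rcv_sketch(struct sk_buff *skb, struct net_device *netdev,
			   struct packet_type *ptype,
			   struct net_device *orig_dev)
{
	/* ... hand the frame to the right per-cpu receive thread ... */
	kfree_skb(skb);
	return 0;
}

static void fcoe_handlers_setup(struct fcoe_interface *fcoe,
				struct net_device *netdev)
{
	fcoe->fcoe_packet_type.func = fcoe_rcv_sketch;
	fcoe->fcoe_packet_type.type = htons(ETH_P_FCOE);
	fcoe->fcoe_packet_type.dev = netdev;	/* bind to this netdev only */
	dev_add_pack(&fcoe->fcoe_packet_type);	/* register exactly once per netdev */
}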
-
Chris Leech authored
The network interface needs to be shared between all NPIV VN_Ports, therefore it should be tracked in the fcoe_interface and not per SCSI host in fcoe_port. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Chris Leech authored
In preparation for NPIV support, I'm splitting the fcoe instance structure into two to remove the assumptions about it being 1:1 with the net_device. There will now be two structures, one which is 1:1 with the underlying net_device and one which is allocated per virtual SCSI/FC host. fcoe_softc is renamed to fcoe_port for the per-Scsi_Host FCoE private data. Later patches will start moving shared state from fcoe_port to fcoe_interface. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
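A rough sketch of the resulting split, per-net_device state versus per-Scsi_Host state; the exact field list is an assumption pieced together from the surrounding patches, and the real definitions need the libfc/libfcoe headers:

/* One per net_device, shared by all NPIV ports on that device. */
struct fcoe_interface {
	struct net_device	*netdev;
	struct packet_type	fcoe_packet_type;
	struct fcoe_ctlr	ctlr;		/* FIP controller */
	struct fc_exch_mgr	*oem;		/* shared offload exchange manager */
	struct kref		kref;
};

/* One per virtual SCSI/FC host (formerly fcoe_softc). */
struct fcoe_port {
	struct fcoe_interface	*fcoe;
	struct fc_lport		*lport;
	struct sk_buff_head	fcoe_pending_queue;
	struct timer_list	timer;		/* transmit backlog timer */
	struct work_struct	destroy_work;	/* deferred destroy */
};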
-
Chris Leech authored
By passing in the parent device instead of assuming the netdev is what should be used, fcoe_if_create becomes usable for NPIV vports as well. You still need a netdev, because that's how FCoE works. Also removed some duplicate checks from fcoe_if_create that are already in fcoe_create. fcoe_if_destroy needs to take an lport as its only argument, not a netdev. That removes the 1:1 netdev:lport assumption from the destroy path. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Joe Eykholt authored
The hostlist and the hostlist_lock were initialized both in the declaration and in fcoe_init(). Remove the unneeded code. Signed-off-by: Joe Eykholt <jeykholt@cisco.com> Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Chris Leech authored
fcoe_if_init() can fail, but its return value wasn't checked. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Chris Leech authored
Use cancel_work_sync() in place of flush_work(), so that fcoe_ctlr_destroy() can be called from a workqueue. Also, purge the receive queue after recv_work has been cancelled, because if recv_work never ran the queue is not guaranteed to be empty. Signed-off-by: Chris Leech <christopher.leech@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
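A sketch of the resulting teardown, assuming the controller keeps its receive work and queue in recv_work and fip_recv_list as in libfcoe:

#include <linux/workqueue.h>
#include <linux/skbuff.h>
#include <scsi/libfcoe.h>

static void fcoe_ctlr_destroy_sketch(struct fcoe_ctlr *fip)
{
	/* cancel_work_sync() is safe from a workqueue, unlike flush_work() */
	cancel_work_sync(&fip->recv_work);

	/* recv_work may have been cancelled before it ever ran, so the
	 * receive queue is not guaranteed to be empty; purge it explicitly.
	 */
	skb_queue_purge(&fip->fip_recv_list);
}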
-
Yi Zou authored
This adds fcoe_ddp_min as a module parameter for the fcoe module, exposed at: /sys/module/fcoe/parameters/ddp_min It is observed that for some hardware, particularly Intel 82599, there is too much overhead in setting up a context for a direct data placement (DDP) read when the requested read I/O size is small. This is added as a module parameter for performance tuning; it defaults to 0 and users can change it based on their own hardware. Signed-off-by: Yi Zou <yi.zou@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
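A sketch of how such a tunable is typically wired up; the permission bits and description text below are assumptions:

#include <linux/module.h>
#include <linux/stat.h>

static unsigned int fcoe_ddp_min;	/* 0 by default: always attempt DDP */
module_param_named(ddp_min, fcoe_ddp_min, uint, S_IRUGO | S_IWUSR);
MODULE_PARM_DESC(ddp_min,
		 "Minimum read I/O size, in bytes, for which DDP setup is attempted");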
-
- 05 Sep, 2009 19 commits
-
-
Vasu Dev authored
1. Updates fcoe_rcv() to queue incoming frames to the fcoe per-cpu thread on which the frame's exch originated, and simply uses the current cpu for request exches not originated by the initiator. It is redundant to guard this code with CONFIG_SMP, so the CONFIG_SMP uses around it are removed.
2. Updates fc_exch_em_alloc, fc_exch_delete, and fc_exch_find to use per-cpu exch pools. fc_exch_delete is a rename of the older fc_exch_mgr_delete_ep; since exches are now deleted from the pools of an EM, the shorter name is sufficient and clearer. These functions now map an exch id to its index into the exch pool using fc_cpu_mask, fc_cpu_order, and the EM min_xid, as explained in detail in the previous patch: the lower fc_cpu_mask bits of the exch id carry the cpu number, and the upper bits are the sum of the EM min_xid and the exch index in the pool (see the sketch below). The pool's next_index is used to track exch allocation from the pool, with pool_max_index as the upper bound of the exches array in the pool.
3. Adds an exch pool pointer to fc_exch so fc_exch_delete can free an exch back to its pool.
4. Updates fc_exch_mgr_reset to reset all exch pools of an EM. This required adding fc_exch_pool_reset to reset the exches in a pool, and having fc_exch_mgr_reset call fc_exch_pool_reset for each pool within each EM of a lport.
5. Removes the no-longer-needed exches array, em_lock, next_xid, and total_exches from struct fc_exch_mgr; these are not needed once per-cpu exch pools are used. Also removes the unused max_read and last_read from struct fc_exch_mgr.
6. Updates the locking notes for the exch pool lock versus the fc_exch lock, and uses the pool lock in exch allocation, lookup, and reset.
Signed-off-by: Vasu Dev <vasu.dev@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
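To make the exch-id-to-pool mapping concrete, here is a small standalone illustration of the encoding described in item 2; the constant values are made up (libfc derives fc_cpu_order and fc_cpu_mask from nr_cpu_ids):

#include <stdio.h>

/* Example: 4 CPUs -> order 2, mask 0x3. */
#define FC_CPU_ORDER	2
#define FC_CPU_MASK	((1u << FC_CPU_ORDER) - 1)
#define EM_MIN_XID	0x1000u		/* low FC_CPU_ORDER bits must be zero */

/* Allocation: encode the pool index and the owning cpu into the exch id. */
static unsigned int xid_encode(unsigned int cpu, unsigned int index)
{
	return EM_MIN_XID + ((index << FC_CPU_ORDER) | cpu);
}

int main(void)
{
	unsigned int xid = xid_encode(3, 7);

	/* Lookup: recover the cpu and pool index from an incoming frame. */
	unsigned int cpu = xid & FC_CPU_MASK;
	unsigned int index = (xid - EM_MIN_XID) >> FC_CPU_ORDER;

	printf("xid 0x%x -> cpu %u, pool index %u\n", xid, cpu, index);
	return 0;
}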
-
Vasu Dev authored
Adds per-cpu exch pools for these reasons:
1. Currently an EM instance is shared across all cpus to manage all exches for all cpus. This requires taking em_lock across all cpus for exch alloc, free, lookup, and reset on every frame, which makes em_lock expensive. Having a per-cpu exch pool with its own per-cpu pool lock instead will likely reduce locking contention in the fast path for exch alloc, free, and lookup.
2. Per-cpu exch pools will likely improve the cache hit ratio, since all frames of an exch will be processed on the same cpu on which the exch originated.
This patch is only prep work to help keep the complexity of the next patch low, so it only sets up the per-cpu exch pools and related helper funcs to be used by the next patch. The next patch fully makes use of the per-cpu exch pools in all code paths, i.e. tx, rx, and reset.
The per-EM exch id range is divided equally across all cpus to set up the per-cpu exch pools. The division is such that the lower bits of the exch id carry the number of the cpu on which the exch originated; later, a simple bitwise AND of an incoming frame's exch id with fc_cpu_mask retrieves the cpu number, so all frames are directed to the same cpu on which the exch originated. This requires global fc_cpu_mask and fc_cpu_order values, initialized from the maximum possible cpu count nr_cpu_ids rounded up to a power of two; they are used to map between exch ids and exch ptr array indexes in a pool during the exch allocation, find, and reset code paths.
Adds a check in fc_exch_mgr_alloc() to ensure the specified min_xid has its lower bits zero, since those bits are used to carry the cpu info.
Adds and initializes struct fc_exch_pool with all the fields required to manage the exches in a pool. Allocates a per-cpu struct fc_exch_pool with memory for the exches array covering the range of exches per pool; the exches array memory immediately follows struct fc_exch_pool in the same allocation. Adds fc_exch_ptr_get/set() helper functions (sketched below) to get/set an exch ptr in the pool's exches array at a specified index.
Increases the default FCOE_MAX_XID from 0x07EF to 0x0FFF, so that more exches are available per cpu after the exch id range is divided across all cpus as described above.
Signed-off-by: Vasu Dev <vasu.dev@intel.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
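A sketch of the two pieces of prep work that are easiest to show in code, deriving fc_cpu_order/fc_cpu_mask and the pool indexing helpers; the struct layout shown and the fc_setup_cpu_mask() name are assumptions:

#include <linux/log2.h>
#include <linux/cpumask.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct fc_exch;				/* only pointers are used below */

struct fc_exch_pool {			/* illustrative subset of fields */
	u16		next_index;
	u16		total_exches;
	spinlock_t	lock;
	/* array of struct fc_exch pointers follows this struct */
};

static u16 fc_cpu_mask;
static u16 fc_cpu_order;

static void fc_setup_cpu_mask(void)
{
	/* max possible cpus, rounded up to a power of two */
	fc_cpu_order = ilog2(roundup_pow_of_two(nr_cpu_ids));
	fc_cpu_mask = (1 << fc_cpu_order) - 1;
}

static inline struct fc_exch *fc_exch_ptr_get(struct fc_exch_pool *pool,
					      u16 index)
{
	struct fc_exch **exches = (struct fc_exch **)(pool + 1);
	return exches[index];
}

static inline void fc_exch_ptr_set(struct fc_exch_pool *pool, u16 index,
				   struct fc_exch *ep)
{
	((struct fc_exch **)(pool + 1))[index] = ep;
}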
-
Joe Eykholt authored
If using code like this: if (foo) FCOE_DBG("foo\n"); else FCOE_DBG("bar\n"); one gets compile errors, because FCOE_DBG expands with its own semicolon, making one too many for the if statement. Remove the offending semicolon in fcoe.h and also a similar case in libfcoe.c. Signed-off-by: Joe Eykholt <jeykholt@cisco.com> Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
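A standalone illustration of the semicolon problem and the fix; the DBG macro below is generic, not the actual FCOE_DBG definition:

#include <stdio.h>

/* Broken: a trailing semicolon inside the macro expansion adds a second
 * statement after the if-branch, which breaks if/else:
 *
 *   #define DBG(fmt, ...) do { printf(fmt, ##__VA_ARGS__); } while (0);
 */

/* Fixed: no trailing semicolon, so the caller supplies exactly one. */
#define DBG(fmt, ...) do { printf(fmt, ##__VA_ARGS__); } while (0)

int main(void)
{
	int foo = 1;

	if (foo)
		DBG("foo\n");
	else
		DBG("bar\n");
	return 0;
}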
-
Robert Love authored
The statement reads "Exchange timed out, notifying the upper layer"; however, this statement is printed whenever the timer is armed. This is confusing to someone debugging the code, because every time an exchange is initialized there is an incorrect statement claiming that the timer has already timed out. This patch changes the statement to read "Exchange timer armed", which is more accurate. This patch also adds a debug statement in the timeout handler to properly indicate that the exchange has timed out. Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Robert Love authored
There's currently no space between the interface name and the user specified format/string. This patch adds a space and a colon to the output to separate the interface name and the user specified string. So, instead of "ethXfoo" it will read "ethX: foo". Signed-off-by: Robert Love <robert.w.love@intel.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Mike Christie authored
If we had multiple tasks on the cmd or requeue lists and iscsi_tcp returns an error, the write_space function can still run and queue iscsi_data_xmit. If it was a legitimate problem and iscsi_conn_failure was run, but we raced and iscsi_data_xmit was run first, it could miss the suspend bit checks and start trying to send data again and hit another timeout. A similar problem is present when using cxgb3i. This has libiscsi check the suspend bit before calling the xmit task callout, so we at least do not try sending multiple tasks (one could still be sent). Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
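The check described boils down to testing the connection's transmit-suspend bit before invoking the transport's xmit callout; a sketch along these lines, with the function name and exact placement being assumptions:

#include <scsi/libiscsi.h>
#include <scsi/scsi_transport_iscsi.h>

static int iscsi_xmit_task_sketch(struct iscsi_conn *conn,
				  struct iscsi_task *task)
{
	/* Bail out before the xmit callout if transmission on this
	 * connection has been suspended, e.g. by a connection failure.
	 */
	if (test_bit(ISCSI_SUSPEND_BIT, &conn->suspend_tx))
		return -ENODATA;

	return conn->session->tt->xmit_task(task);
}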
-
Mike Christie authored
If a target closed the connection, we will detect it in the state_changed or data_ready callout. This adds a new conn error value to use for this problem, so it is not confused with when the initiator throws a conn error and drops the connection. Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Mike Christie authored
Logging for connections and sessions in the scsi_transport_iscsi module is now controlled by module parameters. Signed-off-by: Erez Zilber <erezzi.list@gmail.com> [Mike Christie: newline fixups and modification of some dbg statements] Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Mike Christie authored
The residual variable is only valid for underrun, so do not print it out for the overrun case. Signed-off-by: Karen Higgins <karen.higgins@qlogic.com> [Mike Christie: Fix coding style issues in patch] Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Mike Christie authored
If we sent multiple PDUs as immediate, the target could be rejecting some, and we have just been dropping the rejection notification. This adds code to handle nop-out rejections, so if a nop-out was sent as a ping and rejected we do not mark the connection bad. Instead we just clean up the timers, since having a PDU make a round trip tells us the connection is good. Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Mike Christie authored
We increment session->cmdsn at the top of iscsi_prep_scsi_cmd_pdu, but if the prep ecb, prep bidi, or init_task call fails, we leave session->cmdsn incremented. This moves the cmdsn manipulation to the end of the function, when we know it has succeeded. It also adds a session->cmdsn-- in queuecommand for the case where a driver like bnx2i tries to send a task from that context but fails. We do not have to do this in the xmit thread context, because that code will retry the same task if the initial call fails. Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Hannes Reinecke authored
The network core will call the state_change() callback prior to the data_ready() callback, which might cause us to lose a connection state change. So we have to evaluate the socket state at the end of the data_ready() callback, too. Signed-off-by: Hannes Reinecke <hare@suse.de> Signed-off-by: Mike Christie <michaelc@cs.wisc.edu> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
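A sketch of the idea: after consuming data in the data_ready callback, re-check the socket state so a close that raced with the read is not lost. Everything except the TCP states, iscsi_conn_failure() and ISCSI_ERR_CONN_FAILED is a placeholder:

#include <net/sock.h>
#include <net/tcp_states.h>
#include <scsi/libiscsi.h>
#include <scsi/iscsi_if.h>

static void iscsi_tcp_data_ready_sketch(struct sock *sk, int bytes)
{
	struct iscsi_conn *conn = sk->sk_user_data;

	/* ... read and process the received segments first ... */

	/* The peer may have closed while we were reading; state_change()
	 * already ran before data_ready(), so re-check the state here.
	 */
	if (sk->sk_state == TCP_CLOSE_WAIT || sk->sk_state == TCP_CLOSING ||
	    sk->sk_state == TCP_CLOSE)
		iscsi_conn_failure(conn, ISCSI_ERR_CONN_FAILED);
}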
-
Andrew Vasquez authored
ISPs which support this feature include 23xx and above. Signed-off-by: Andrew Vasquez <andrew.vasquez@qlogic.com> Signed-off-by: Giridhar Malavali <giridhar.malavali@qlogic.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Andrew Vasquez authored
Lay the groundwork for adding alternative asynchronous operations by generalizing and extending the SRB structure. This allows follow-on patches to add support for:
- Asynchronous logins.
- ELS/CT passthru requests.
- Loopback requests.
- Non-blocking mailbox commands (ABTS, Task Management, etc).
Signed-off-by: Andrew Vasquez <andrew.vasquez@qlogic.com> Signed-off-by: Giridhar Malavali <giridhar.malavali@qlogic.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Kashyap, Desai authored
Bump version to 01.100.06.00 Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com> Reviewed-by: : Eric Moore <Eric.moore@lsi.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Kashyap, Desai authored
Cleaned up the base_interrupt routine to be more efficient. Deleted about a third of the config page API by moving redundant code from all the calling functions into _config_request. Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com> Reviewed-by: Eric Moore <Eric.moore@lsi.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Kashyap, Desai authored
This patch modifies the slave_configure callback so the messages that get sent to the system log for RAID1E volumes contain the string "RAID10" instead of "RAID1E". These messages contain information regarding what kind of scsi device is being added. Certain OEMs can enable displaying the RAID10 string instead of RAID1E via manufacturing page 10. The driver will read this config page at driver load time, then determine from the GenericFlags0 bits whether to display the RAID10 or RAID1E string; an even drive count is also taken into consideration. Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com> Reviewed-by: Eric Moore <Eric.moore@lsi.com> Cc: Stable Tree <stable@kernel.org> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Kashyap, Desai authored
Change the SDEV running state from interrupt context. Previously this was handled in the work queue thread; with this change, we no longer wait for the work queue thread to execute scsih_ublock_io_device to put the SDEV into the running state, which reduces the delay before the device becomes RUNNING. This patch was modified per James' comment not to change the SDEV state using the scsi_device_set_state API, and to use the scsi_internal_device_unblock/scsi_internal_device_block APIs instead. Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com> Reviewed-by: Eric Moore <Eric.moore@lsi.com> Cc: Stable Tree <stable@kernel.org> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-
Kashyap, Desai authored
Deleted the wrapper function called _scsih_link_change. This function was implemented only for compatibility between different kernel versions and is no longer needed. The calling functions are converted to call mpt2sas_transport_update_phy_link_change directly in the transport layer. Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com> Reviewed-by: Eric Moore <Eric.moore@lsi.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de>
-