- 24 Feb, 2012 10 commits
-
-
Ben Greear authored
This is useful for testing RX handling of frames with bad CRCs. Requires driver support to actually put the packet on the wire properly. Signed-off-by: Ben Greear <greearb@candelatech.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Ben Greear authored
This allows enabling/disabling reception of the Ethernet FCS. This can be useful when sniffing packets. For e1000e, enabling RXFCS can change the default behaviour of how the NIC handles the CRC. Disabling RXFCS takes the NIC back to its defaults, which can be configured as part of the module options. Signed-off-by: Ben Greear <greearb@candelatech.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-
Ben Greear authored
When set on hardware that supports the feature, this causes the Ethernet FCS to be appended to the end of the skb. Useful for sniffing packets. Signed-off-by: Ben Greear <greearb@candelatech.com> Tested-by: Aaron Brown <aaron.f.brown@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
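As a rough editorial illustration (assuming the feature flag lands as NETIF_F_RXFCS; the foo_* names are placeholders, not any real driver's symbols), a driver opts in and honours the feature roughly like this:

    #include <linux/netdevice.h>

    /* Advertise that the hardware can leave the Ethernet FCS on the skb. */
    static void foo_advertise_rxfcs(struct net_device *netdev)
    {
        netdev->hw_features |= NETIF_F_RXFCS;
    }

    /* RX-path check: when the feature is enabled, keep the FCS appended. */
    static bool foo_keep_fcs(const struct net_device *netdev)
    {
        return !!(netdev->features & NETIF_F_RXFCS);
    }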
-
Christian Riesch authored
The CLKDIV bitfield in the MDIO Control Register is a 16 bit field, therefore the CLKDIV value may range from 0 to 0xffff. Signed-off-by: Christian Riesch <christian.riesch@omicron.at> Signed-off-by: David S. Miller <davem@davemloft.net>
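For illustration only (the mask and helper names below are made up, not the driver's real symbols), the constraint amounts to clamping the divider to the full 16-bit range:

    #include <linux/types.h>

    #define MDIO_CONTROL_CLKDIV_MASK 0xffff /* CLKDIV occupies bits 15:0 */

    /* Clamp a computed divider to the valid 0..0xffff range. */
    static inline u32 mdio_clamp_clkdiv(u32 div)
    {
        return div > MDIO_CONTROL_CLKDIV_MASK ? MDIO_CONTROL_CLKDIV_MASK : div;
    }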
-
Christian Riesch authored
chan->chan_num is 0..CPDMA_MAX_CHANNELS-1 for tx channels and CPDMA_MAX_CHANNELS..2*CPDMA_MAX_CHANNELS-1 for rx channels. However, the rx and tx teardown registers expect zero-based channel numbering. Since the upper bits of the registers are reserved, the teardown also worked before; this patch is cleanup only. Signed-off-by: Christian Riesch <christian.riesch@omicron.at> Signed-off-by: David S. Miller <davem@davemloft.net>
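A sketch of the mapping described above (the helper name is hypothetical; CPDMA_MAX_CHANNELS is the driver's own constant):

    /* chan_num is 0..CPDMA_MAX_CHANNELS-1 for tx and
     * CPDMA_MAX_CHANNELS..2*CPDMA_MAX_CHANNELS-1 for rx, while the
     * teardown registers want a zero-based index in both cases. */
    static inline int cpdma_hw_chan_index(int chan_num)
    {
        return chan_num % CPDMA_MAX_CHANNELS;
    }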
-
Sathya Perla authored
Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sathya Perla authored
Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sathya Perla authored
This will prevent double free in some cases where be_clear() is called for cleanup when be_setup() fails half-way. Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Sathya Perla authored
As a part of be_close(), instead of waiting for a max of 200ms for each TXQ, wait for a total of 200ms for completions from all TXQs to arrive. Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
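A rough sketch of the shared-budget idea, with placeholder foo_* names rather than the driver's real symbols:

    #include <linux/delay.h>

    /* Wait at most 200ms in total for all TX queues to drain,
     * instead of up to 200ms per queue. */
    static void foo_wait_for_tx_completions(struct foo_adapter *adapter)
    {
        int budget_ms = 200; /* shared across every TX queue */
        int i;

        for (i = 0; i < adapter->num_tx_queues; i++) {
            while (budget_ms > 0 && foo_txq_pending(adapter, i)) {
                msleep(1);
                budget_ms--;
            }
        }
    }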
-
Sathya Perla authored
EEH recovery involves ring cleanup and re-creation. The worker thread must not run during EEH cleanup/resume. Signed-off-by: Sathya Perla <sathya.perla@emulex.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 23 Feb, 2012 10 commits
-
-
Danny Kukawka authored
Unify return value of .ndo_set_mac_address if the given address isn't valid. Return -EADDRNOTAVAIL as eth_mac_addr() already does if is_valid_ether_addr() fails. Signed-off-by: Danny Kukawka <danny.kukawka@bisect.de> Signed-off-by: David S. Miller <davem@davemloft.net>
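For reference, the resulting callback typically looks like this (hypothetical driver; the hardware programming step is elided):

    #include <linux/etherdevice.h>
    #include <linux/string.h>

    static int foo_set_mac_address(struct net_device *dev, void *p)
    {
        struct sockaddr *addr = p;

        if (!is_valid_ether_addr(addr->sa_data))
            return -EADDRNOTAVAIL; /* same error eth_mac_addr() returns */

        memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN);
        /* ... write the new address to the hardware ... */
        return 0;
    }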
-
Danny Kukawka authored
Unify return value of .ndo_set_mac_address if the given address isn't valid. Return -EADDRNOTAVAIL as eth_mac_addr() already does if is_valid_ether_addr() fails. Signed-off-by: Danny Kukawka <danny.kukawka@bisect.de> Acked-by: Stephen Hemminger <shemminger@vyatta.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Danny Kukawka authored
Unify return value of .ndo_set_mac_address if the given address isn't valid. Return -EADDRNOTAVAIL as eth_mac_addr() already does if is_valid_ether_addr() fails. Signed-off-by: Danny Kukawka <danny.kukawka@bisect.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Danny Kukawka authored
Unify return value of .ndo_set_mac_address if the given address isn't valid. Return -EADDRNOTAVAIL as eth_mac_addr() already does if is_valid_ether_addr() fails. Signed-off-by: Danny Kukawka <danny.kukawka@bisect.de> Signed-off-by: David S. Miller <davem@davemloft.net>
-
-
Matt Carlson authored
This patch seeks to clean up the timer related code. It begins by moving one-time timer setup code from tg3_open() to tg3_init_one(). It then creates a function that encapsulates the code needed to start the timer. A tg3_timer_stop() function was added for parity. Finally, this patch moves all the timer functions to a more suitable location. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
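A sketch of the resulting split, not the exact driver diff (the struct tg3 fields used here are assumed from context):

    #include <linux/jiffies.h>
    #include <linux/timer.h>

    /* One-time init moves to the probe path:
     *     setup_timer(&tp->timer, tg3_timer, (unsigned long)tp);
     * and open/close (and the reset paths) use small helpers: */
    static void tg3_timer_start(struct tg3 *tp)
    {
        tp->timer.expires = jiffies + tp->timer_offset;
        add_timer(&tp->timer);
    }

    static void tg3_timer_stop(struct tg3 *tp)
    {
        del_timer_sync(&tp->timer);
    }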
-
Matt Carlson authored
If an error happens in the tx completion thread, tg3_reset_task will be scheduled and TX_RECOVERY_PENDING will be set. The TX_RECOVERY_PENDING flag causes tg3_poll[_msix] to return early before doing much of its work. Tg3_reset_task() gets canceled when the configuration of the device is changing, which always results in a chip reset. When this happens, the TX_RECOVERY_PENDING flag may be left set, which would unnecessarily hinder tg3_poll from doing work. This patch fixes the problem. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
tg3_phy_copper_begin() has code that configures the link advertisements through the use of the link_config.speed and link_config.duplex members. The driver does not internally use these members in this way, nor is it (currently) permitted via the ethtool interface. This patch removes the dead code. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Matt Carlson authored
The tg3 driver tried to detect link changes by comparing the tg3 local active_speed member with SPEED_UNKNOWN (or formerly SPEED_INVALID). This check is not correct, since phylib will never set its speed member to either of these two values. The code only appeared to work because tg3 initializes active_speed to SPEED_INVALID during tg3_init_one. This patch introduces a new "old_link" tg3 member and then compares the phy_device's link member against it to detect link state changes. Signed-off-by: Matt Carlson <mcarlson@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
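The detection pattern in the phylib adjust_link callback then looks roughly like this (only phydev->link comes from phylib; the foo_* names and the old_link field are placeholders):

    #include <linux/netdevice.h>
    #include <linux/phy.h>

    static void foo_adjust_link(struct net_device *dev)
    {
        struct foo_priv *priv = netdev_priv(dev);
        struct phy_device *phydev = priv->phydev;

        if (phydev->link != priv->old_link) {
            priv->old_link = phydev->link;
            /* handle the link-up/link-down transition here */
        }
    }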
-
Ben Hutchings authored
efx_for_each_possible_channel_tx_queue() should do nothing for RX-only or extra channels. The current definition results in allocating additional unused hardware TX queues when using the mqprio qdisc and either separate_tx_channels or SR-IOV. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
- 22 Feb, 2012 8 commits
-
-
Ben Hutchings authored
Fix some indentation and line continuations. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Ben Hutchings authored
We have a very simple way of allocating buffer table entries to queues, which is just to take the next one available. The extra channels are the highest numbered channels but they need to be allocated the lowest entries so that the traffic channels can be allocated new entries without any collisions. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Ben Hutchings authored
efx_vfdi_set_status_page() validates the peer page count by calculating the size of a request containing that many addresses and comparing that with the maximum valid request size (4KB). The calculation involves a multiplication that may overflow on a 32-bit system. We use kcalloc() to allocate memory to store the addresses; that also does a multiplication and it does check for integer overflow, so any values larger than 0x1fffffff will be rejected. However, values in the range [0x1ffffffc, 0x1fffffff] pass both tests and result in an attempt to allocate nearly 4GB on the heap. This should be rejected rather quickly as it's obviously impossible on a 32-bit system, and indeed the maximum possible heap allocation is 32MB. Still, let's make absolutely sure by fixing the initial validation. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
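One way to make the initial validation overflow-proof is to compare against a divided limit instead of multiplying (placeholder names and sizes below, not the driver's real ones):

    #include <linux/types.h>

    #define MAX_REQ_SIZE 4096 /* the 4KB limit from the text */
    #define REQ_HDR_SIZE 64   /* placeholder for the fixed request header */

    /* Reject the count with a division so no multiplication can wrap
     * on a 32-bit system. */
    static bool peer_page_count_valid(u32 count)
    {
        u32 max_addrs = (MAX_REQ_SIZE - REQ_HDR_SIZE) / sizeof(u64);

        return count <= max_addrs;
    }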
-
Ben Hutchings authored
This requirement was meant to be implied in the name 'status page'. One out-of-tree VF driver allocates a buffer using the structure size and not a full page - hence the current odd specification - but in practice that allocation will be padded and aligned to at least 4KB. Therefore, we can specify this and have the option to extend the structure up to 4KB without worrying about VF drivers using odd-shaped buffers. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
-
Eric Dumazet authored
Piergiorgio Beruto expressed the need to fetch the size of the first datagram in the queue for AF_UNIX sockets and suggested a patch against the SIOCINQ ioctl. I suggested instead to implement MSG_TRUNC support as a recv() input flag, as already done for RAW, UDP & NETLINK sockets. len = recv(fd, &byte, 1, MSG_PEEK | MSG_TRUNC); MSG_TRUNC asks recv() to return the real length of the packet, even when it was longer than the passed buffer. There is a risk that a userland application used MSG_TRUNC by accident (since it had no effect on af_unix sockets) and this might break after this patch. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Tested-by: Piergiorgio Beruto <piergiorgio.beruto@gmail.com> CC: Michael Kerrisk <mtk.manpages@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
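A complete userland example around the call quoted above:

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Size of the next queued datagram on an AF_UNIX socket, without
     * consuming it; returns -1 on error. */
    static ssize_t next_dgram_size(int fd)
    {
        char byte;

        /* MSG_TRUNC: report the real packet length even though only a
         * one-byte buffer is passed; MSG_PEEK: leave the data queued. */
        return recv(fd, &byte, 1, MSG_PEEK | MSG_TRUNC);
    }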
-
Danny Kukawka authored
Use eth_mac_addr() for .ndo_set_mac_address and remove lowpan_set_address(), since it currently does the same as eth_mac_addr(). Additional advantage: eth_mac_addr() already checks whether the given address is valid. Signed-off-by: Danny Kukawka <danny.kukawka@bisect.de> Acked-by: Dmitry Eremin-Solenikov <dbaryshkov@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Danny Kukawka authored
Use eth_mac_addr() for .ndo_set_mac_address and remove typhoon_set_mac_address(), since it currently does the same as eth_mac_addr(). Additional advantage: eth_mac_addr() already checks whether the given address is valid. Signed-off-by: Danny Kukawka <danny.kukawka@bisect.de> Acked-by: Dave Dillow <dave@thedillows.org> Signed-off-by: David S. Miller <davem@davemloft.net>
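The net_device_ops hookup then reduces to the following (sketched for a hypothetical driver):

    #include <linux/etherdevice.h>
    #include <linux/netdevice.h>

    /* eth_mac_addr() validates the address and returns -EADDRNOTAVAIL
     * if it is not valid, so no driver-private wrapper is needed. */
    static const struct net_device_ops foo_netdev_ops = {
        /* ... other callbacks ... */
        .ndo_set_mac_address = eth_mac_addr,
    };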
-
Davidlohr Bueso authored
This driver is the last user of the IRQF_SAMPLE_RANDOM flag for net drivers, and since add_*_randomness interfaces have now deprecated the flag as a source of external noise, we can remove it. Signed-off-by: Davidlohr Bueso <dave@gnu.org> Signed-off-by: David S. Miller <davem@davemloft.net>
-
- 21 Feb, 2012 12 commits
-
-
-
John W. Linville authored
Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/linville/wireless-next into for-davem
-
Pavel Emelyanov authored
The same here -- we can protect the sk_peek_off manipulations with the unix_sk->readlock mutex. The peeking of data from a stream socket is done in the datagram style, i.e. even if there's enough room for more data in the user buffer, only the head skb's data is copied in there. This feature is preserved when peeking data from a given offset -- the data is read till the nearest skb's boundary. Signed-off-by: Pavel Emelyanov <xemul@parallels.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
The sk_peek_off manipulations are protected with the unix_sk->readlock mutex. This mutex is enough since all we need is to synchronize setting the offset vs reading the queue head. The latter is fully covered with the mentioned lock. The recently added __skb_recv_datagram's offset is used to pick the skb to read the data from. Signed-off-by: Pavel Emelyanov <xemul@parallels.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
This one specifies where to start MSG_PEEK-ing queue data from. When set to a negative value, MSG_PEEK works as usual -- it always peeks from the head of the queue. When some bytes are peeked from the queue and the peeking offset is non-negative, it is moved forward so that the next peek will return the next portion of data. When a non-peeking recvmsg occurs and the peeking offset is non-negative, it is moved backward so that the next peek will still peek the proper data (i.e. the data that would have been picked if there had been no non-peeking recv in between). The offset is set using a per-proto operation to let the protocol handle the locking issues and to check whether the peeking offset feature is supported by the protocol the socket belongs to. Signed-off-by: Pavel Emelyanov <xemul@parallels.com> Signed-off-by: David S. Miller <davem@davemloft.net>
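From userland the offset is driven through a socket option; a hedged sketch assuming it is exposed as SO_PEEK_OFF, as introduced alongside this change:

    #include <sys/socket.h>
    #include <sys/types.h>

    /* Two peeks in a row return consecutive chunks, because the first
     * MSG_PEEK advances the kernel-side peek offset. */
    static void peek_twice(int fd)
    {
        char buf[128];
        int off = 0;
        ssize_t n;

        if (setsockopt(fd, SOL_SOCKET, SO_PEEK_OFF, &off, sizeof(off)) == 0) {
            n = recv(fd, buf, sizeof(buf), MSG_PEEK); /* first chunk */
            n = recv(fd, buf, sizeof(buf), MSG_PEEK); /* continues after it */
            (void)n;
        }
    }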
-
Pavel Emelyanov authored
Signed-off-by: Pavel Emelyanov <xemul@parallels.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
This one is only considered for the MSG_PEEK flag, and the value pointed to by it specifies where to start peeking bytes from. If the offset happens to point into the middle of the returned skb, the offset within this skb is written back into this very argument. Signed-off-by: Pavel Emelyanov <xemul@parallels.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Pavel Emelyanov authored
This makes lines shorter and simplifies further patching. Signed-off-by: Pavel Emelyanov <xemul@parallels.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Joe Perches authored
The isdn source code uses an outdated coding style. Update the coding style on a per-line basis so that git diff -w shows only elided blank lines at EOF. Done with emacs, some scripts and some typing. Built x86 allyesconfig. No detected change in objdump -d or size. Signed-off-by: Joe Perches <joe@perches.com>
-
Dmitry Kravkov authored
Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com> Signed-off-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
-
Dmitry Kravkov authored
The patch provides a workaround for a bug in FW 7.2.16, which in GRO mode may miscalculate the buffer and place one frag less on the SGE than it could. This may happen only for some MTUs; we mark these MTUs with the gro_check flag during device initialization or on MTU change. The next FW should include a fix for the issue, and then the patch can be reverted. Signed-off-by: Dmitry Kravkov <dmitry@broadcom.com> Signed-off-by: Eilon Greenstein <eilong@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>