Commit 4d30b801 authored by David Daney, committed by David S. Miller

netdev: octeon_mgmt: Fix race condition freeing TX buffers.

Under heavy load the TX cleanup tasklet and xmit threads would race
and try to free too many buffers.
Signed-off-by: David Daney <ddaney@caviumnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
parent 62538d24
@@ -189,12 +189,19 @@ static void octeon_mgmt_clean_tx_buffers(struct octeon_mgmt *p)
 	mix_orcnt.u64 = cvmx_read_csr(CVMX_MIXX_ORCNT(port));
 	while (mix_orcnt.s.orcnt) {
+		spin_lock_irqsave(&p->tx_list.lock, flags);
+
+		mix_orcnt.u64 = cvmx_read_csr(CVMX_MIXX_ORCNT(port));
+
+		if (mix_orcnt.s.orcnt == 0) {
+			spin_unlock_irqrestore(&p->tx_list.lock, flags);
+			break;
+		}
+
 		dma_sync_single_for_cpu(p->dev, p->tx_ring_handle,
 					ring_size_to_bytes(OCTEON_MGMT_TX_RING_SIZE),
 					DMA_BIDIRECTIONAL);
 
-		spin_lock_irqsave(&p->tx_list.lock, flags);
-
 		re.d64 = p->tx_ring[p->tx_next_clean];
 		p->tx_next_clean =
 			(p->tx_next_clean + 1) % OCTEON_MGMT_TX_RING_SIZE;
...
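The essence of the fix is a double-check-under-lock pattern: the unlocked read of MIX_ORCNT is only a hint, so the counter is re-read after taking tx_list.lock, and the loop bails out if another thread has already consumed the completions. Below is a minimal, hypothetical userspace sketch of the same pattern (not the driver code itself): a pthread mutex stands in for the spinlock and an atomic counter stands in for the ORCNT register.

/* Hypothetical analogue of the fix: two threads drain a shared
 * completion counter; re-checking it under the lock ensures each
 * completed "buffer" is freed exactly once even under a race. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int orcnt = 16;   /* completions the "hardware" reports */
static pthread_mutex_t tx_list_lock = PTHREAD_MUTEX_INITIALIZER;
static int freed;               /* buffers freed so far (lock-protected) */

static void *clean_tx_buffers(void *arg)
{
	while (atomic_load(&orcnt) > 0) {       /* unlocked peek, may be stale */
		pthread_mutex_lock(&tx_list_lock);

		/* Re-check under the lock: another thread may have consumed
		 * this completion between the peek above and here. */
		if (atomic_load(&orcnt) == 0) {
			pthread_mutex_unlock(&tx_list_lock);
			break;
		}

		atomic_fetch_sub(&orcnt, 1);    /* claim one completion */
		freed++;                        /* free exactly one buffer */
		pthread_mutex_unlock(&tx_list_lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, clean_tx_buffers, NULL);
	pthread_create(&b, NULL, clean_tx_buffers, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	printf("freed %d buffers (expected 16)\n", freed);
	return 0;
}

Without the re-check, both threads could observe a nonzero count before either takes the lock and together free more buffers than were actually completed, which is the race the commit message describes between the TX cleanup tasklet and the xmit path.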