Commit 2407e34e
authored Mar 29, 2004 by David S. Miller
Merge nuts.davemloft.net:/disk1/BK/network-2.6
into nuts.davemloft.net:/disk1/BK/net-2.6

parents f0fdf5f8 faf1633b
Showing 10 changed files with 595 additions and 146 deletions (+595 -146)
Documentation/networking/packet_mmap.txt   +412    -0
net/Makefile                                 +3    -1
net/ipv4/netfilter/ip_nat_standalone.c      +10    -1
net/ipv4/netfilter/ipt_MASQUERADE.c          +1    -1
net/ipv4/tcp_ipv4.c                          +6    -3
net/ipv6/Makefile                            +2    -0
net/ipv6/exthdrs.c                           +0  -102
net/ipv6/exthdrs_core.c                    +108    -0
net/ipv6/ipv6_syms.c                         +0    -2
net/packet/af_packet.c                      +53   -36
Documentation/networking/packet_mmap.txt  (new file, 0 → 100644)
DaveM:
If you agree with it I will send two small patches to modify
kernel's configure help.
Ulisses
--------------------------------------------------------------------------------
+ ABSTRACT
--------------------------------------------------------------------------------
This file documents the CONFIG_PACKET_MMAP option available with the PACKET
socket interface on 2.4 and 2.6 kernels. This type of socket is used to
capture network traffic with utilities like tcpdump or any other tool that
uses the libpcap library.
You can find the latest version of this document at
http://pusa.uv.es/~ulisses/packet_mmap/
Please send me your comments to
Ulisses Alonso Camaró <uaca@i.hate.spam.alumni.uv.es>
-------------------------------------------------------------------------------
+ Why use PACKET_MMAP
--------------------------------------------------------------------------------
In Linux 2.4/2.6, if PACKET_MMAP is not enabled, the capture process is very
inefficient. It uses very limited buffers and requires one system call
to capture each packet; it requires two if you want to get the packet's
timestamp (as libpcap always does).

On the other hand PACKET_MMAP is very efficient. PACKET_MMAP provides a
circular buffer of configurable size, mapped in user space. This way reading
packets just needs to wait for them; most of the time there is no need to
issue a single system call. Using a buffer shared between the kernel and the
user also has the benefit of minimizing packet copies.
Using PACKET_MMAP improves the performance of the capture process, but it
isn't everything. At least if you are capturing at high speeds (relative to
the CPU speed), you should check whether the device driver of your network
interface card supports some sort of interrupt load mitigation or (even
better) NAPI, and also make sure it is enabled.
--------------------------------------------------------------------------------
+ How to use CONFIG_PACKET_MMAP
--------------------------------------------------------------------------------
From the user standpoint, you should use the higher level libpcap library,
which is a de facto standard, portable across nearly all operating systems
including Win32.

That said, at the time of this writing the official libpcap 0.8.1 does not
include support for PACKET_MMAP, and most likely neither does the libpcap
included in your distribution.

I'm aware of two implementations of PACKET_MMAP in libpcap:

    http://pusa.uv.es/~ulisses/packet_mmap/ (by Simon Patarin, based on libpcap 0.6.2)
    http://public.lanl.gov/cpw/ (by Phil Wood, based on the latest libpcap)

The rest of this document is intended for people who want to understand
the low level details or want to improve libpcap by including PACKET_MMAP
support.
--------------------------------------------------------------------------------
+ How to use CONFIG_PACKET_MMAP directly
--------------------------------------------------------------------------------
From the system call standpoint, the use of PACKET_MMAP involves
the following process:

[setup]     socket() -------> creation of the capture socket
            setsockopt() ---> allocation of the circular buffer (ring)
            mmap() ---------> mapping of the allocated buffer to the
                              user process

[capture]   poll() ---------> to wait for incoming packets

[shutdown]  close() --------> destruction of the capture socket and
                              deallocation of all associated resources.

Socket creation and destruction is straightforward, and is done
the same way with or without PACKET_MMAP:

int fd;

fd = socket(PF_PACKET, mode, htons(ETH_P_ALL));

where mode is SOCK_RAW for the raw interface, where link level
information can be captured, or SOCK_DGRAM for the cooked
interface, where link level information capture is not
supported and a link level pseudo-header is provided
by the kernel.

The destruction of the socket and all associated resources
is done by a simple call to close(fd).
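As an illustration only, a minimal compilable version of the calls above
might look as follows (error handling is reduced to a bare minimum):

#include <sys/socket.h>
#include <arpa/inet.h>          /* htons */
#include <linux/if_ether.h>     /* ETH_P_ALL */
#include <unistd.h>

int main(void)
{
	/* SOCK_RAW also captures the link level header; use SOCK_DGRAM
	 * for the cooked interface described above. */
	int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

	if (fd < 0)
		return 1;       /* needs CAP_NET_RAW / root */

	/* ... setsockopt(PACKET_RX_RING), mmap() and the poll() loop
	 *     described in the following sections go here ... */

	close(fd);
	return 0;
}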
Next I will describe the PACKET_MMAP settings and their constraints,
as well as the mapping of the circular buffer in the user process and
the use of this buffer.
--------------------------------------------------------------------------------
+ PACKET_MMAP settings
--------------------------------------------------------------------------------
Setting up PACKET_MMAP from user level code is done with a call like

   setsockopt(fd, SOL_PACKET, PACKET_RX_RING, (void *) &req, sizeof(req))

The most significant argument in the previous call is the req parameter;
this parameter must have the following structure:
struct tpacket_req
{
unsigned int tp_block_size; /* Minimal size of contiguous block */
unsigned int tp_block_nr; /* Number of blocks */
unsigned int tp_frame_size; /* Size of frame */
unsigned int tp_frame_nr; /* Total number of frames */
};
This structure is defined in /usr/include/linux/if_packet.h and establishes a
circular buffer (ring) of unswappable memory mapped in the capture process.
Being mapped in the capture process allows reading the captured frames and
related meta-information like timestamps without requiring a system call.
Captured frames are grouped in blocks. Each block is a physically contiguous
region of memory and holds tp_block_size/tp_frame_size frames. The total number
of blocks is tp_block_nr. Note that tp_frame_nr is a redundant parameter because
frames_per_block = tp_block_size/tp_frame_size
Indeed, packet_set_ring checks that the following condition holds:
frames_per_block * tp_block_nr == tp_frame_nr
Let's see an example with the following values:
tp_block_size= 4096
tp_frame_size= 2048
tp_block_nr = 4
tp_frame_nr = 8
we will get the following buffer structure:
block #1 block #2
+---------+---------+ +---------+---------+
| frame 1 | frame 2 | | frame 3 | frame 4 |
+---------+---------+ +---------+---------+
block #3 block #4
+---------+---------+ +---------+---------+
| frame 5 | frame 6 | | frame 7 | frame 8 |
+---------+---------+ +---------+---------+
A frame can be of any size, with the only condition that it fits in a block.
A block can only hold an integer number of frames; in other words, a frame
cannot span across two blocks, so there are some details you have to take
into account when choosing the frame_size. See "Mapping and use of the
circular buffer (ring)".
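For illustration, a minimal sketch of allocating the ring with the example
values above (error handling omitted; the constants come from
<linux/if_packet.h>):

#include <sys/socket.h>
#include <linux/if_packet.h>
#include <string.h>

/* A sketch: request a ring of 4 blocks of 4096 bytes, each holding two
 * 2048-byte frames, as in the example above.  Returns the setsockopt()
 * result, 0 on success. */
static int setup_rx_ring(int fd)
{
	struct tpacket_req req;

	memset(&req, 0, sizeof(req));
	req.tp_block_size = 4096;   /* must be a multiple of PAGE_SIZE */
	req.tp_frame_size = 2048;   /* must be a multiple of TPACKET_ALIGNMENT */
	req.tp_block_nr   = 4;
	req.tp_frame_nr   = 8;      /* = (block_size/frame_size) * block_nr */

	return setsockopt(fd, SOL_PACKET, PACKET_RX_RING,
			  (void *) &req, sizeof(req));
}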
--------------------------------------------------------------------------------
+ PACKET_MMAP setting constraints
--------------------------------------------------------------------------------
In kernel versions prior to 2.4.26 (for the 2.4 branch) and 2.6.5 (2.6 branch),
the PACKET_MMAP buffer could hold only 32768 frames in a 32 bit architecture or
16384 in a 64 bit architecture. For information on these kernel versions
see http://pusa.uv.es/~ulisses/packet_mmap/packet_mmap.pre-2.4.26_2.6.5.txt
Block size limit
------------------
As stated earlier, each block is a contiguous physical region of memory. These
memory regions are allocated with calls to the __get_free_pages() function. As
the name indicates, this function allocates pages of memory; its second
argument is the "order", a power-of-two number of pages, that is
(for PAGE_SIZE == 4096) order=0 ==> 4096 bytes, order=1 ==> 8192 bytes,
order=2 ==> 16384 bytes, etc. The maximum size of a
region allocated by __get_free_pages is determined by the MAX_ORDER macro. More
precisely the limit can be calculated as:

   PAGE_SIZE << MAX_ORDER

   On the i386 architecture PAGE_SIZE is 4096 bytes
   In a 2.4/i386 kernel MAX_ORDER is 10
   In a 2.6/i386 kernel MAX_ORDER is 11

So __get_free_pages can allocate as much as 4 MB or 8 MB on i386 with a 2.4
or 2.6 kernel respectively.
User space programs can include /usr/include/sys/user.h and
/usr/include/linux/mmzone.h to get the PAGE_SIZE and MAX_ORDER declarations.
The page size can also be determined dynamically with the getpagesize(2)
system call.
Block number limit
--------------------
To understand the constraints of PACKET_MMAP, we have to look at the structure
used to hold the pointers to each block.

Currently, this structure is a vector called pg_vec, dynamically allocated
with kmalloc; its size limits the number of blocks that can be allocated.
+---+---+---+---+
| x | x | x | x |
+---+---+---+---+
| | | |
| | | v
| | v block #4
| v block #3
v block #2
block #1
kmalloc allocates any number of bytes of physically contiguous memory from
a pool of predetermined sizes. This pool of memory is maintained by the slab
allocator, which is ultimately responsible for doing the allocation and
hence imposes the maximum amount of memory that kmalloc can allocate.

In a 2.4/2.6 kernel and the i386 architecture, the limit is 131072 bytes. The
predetermined sizes that kmalloc uses can be checked in the "size-<bytes>"
entries of /proc/slabinfo.
In a 32 bit architecture, pointers are 4 bytes long, so the total number of
pointers to blocks is
131072/4 = 32768 blocks
PACKET_MMAP buffer size calculator
------------------------------------
Definitions:
<size-max>    : the maximum size allocatable with kmalloc (see /proc/slabinfo)
<pointer size>: depends on the architecture -- sizeof(void *)
<page size>   : depends on the architecture -- PAGE_SIZE or getpagesize(2)
<max-order>   : the value defined with MAX_ORDER
<frame size>  : an upper bound on the frame capture size (more on this later)

From these definitions we derive

	<block number> = <size-max>/<pointer size>
	<block size> = <page size> << <max-order>

so the max buffer size is

	<block number> * <block size>

and the number of frames is

	<block number> * <block size> / <frame size>
Suppose the following parameters, which apply to a 2.6 kernel on an
i386 architecture:

	<size-max>     = 131072 bytes
	<pointer size> = 4 bytes
	<page size>    = 4096 bytes
	<max-order>    = 11

and a value for <frame size> of 2048 bytes. These parameters will yield

	<block number> = 131072/4 = 32768 blocks
	<block size>   = 4096 << 11 = 8 MiB.

and hence the buffer will have a size of 262144 MiB. So it could hold

	262144 MiB / 2048 bytes = 134217728 frames
Actually, this buffer size is not attainable on an i386 architecture.
Remember that the memory is allocated in kernel space; in the case of
i386 the kernel memory size is limited to 1 GiB.

None of these memory allocations are freed until the socket is closed. The
allocations are done with GFP_KERNEL priority; this basically means that
the allocation can wait and swap out other processes' memory in order to
obtain the necessary memory, so the limits can normally be reached.
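As a rough illustration only, the arithmetic above can be reproduced with a
small program; PAGE_SIZE is obtained from getpagesize(), while the size-max
and MAX_ORDER values are assumptions hard-coded for the 2.6/i386 case
discussed here:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Assumed values for a 2.6/i386 kernel, as described above. */
	unsigned long long size_max   = 131072;  /* max kmalloc size, see /proc/slabinfo */
	unsigned long long max_order  = 11;      /* MAX_ORDER on 2.6/i386 */
	unsigned long long frame_size = 2048;    /* chosen capture frame size */

	unsigned long long page_size    = (unsigned long long) getpagesize();
	unsigned long long block_number = size_max / sizeof(void *);
	unsigned long long block_size   = page_size << max_order;
	unsigned long long buffer_size  = block_number * block_size;

	printf("max blocks      : %llu\n", block_number);
	printf("block size      : %llu bytes\n", block_size);
	printf("max buffer size : %llu bytes\n", buffer_size);
	printf("max frames      : %llu\n", buffer_size / frame_size);
	return 0;
}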
Other constraints
-------------------
If you check the source code you will see that what I draw here as a frame
is not just the link level frame. At the beginning of each frame there is a
header called struct tpacket_hdr, used in PACKET_MMAP to hold link level
frame meta information such as the timestamp. So what we draw here as a frame
is really the following (from include/linux/if_packet.h):
/*
Frame structure:
- Start. Frame must be aligned to TPACKET_ALIGNMENT=16
- struct tpacket_hdr
- pad to TPACKET_ALIGNMENT=16
- struct sockaddr_ll
- Gap, chosen so that packet data (Start+tp_net) aligns to
TPACKET_ALIGNMENT=16
- Start+tp_mac: [ Optional MAC header ]
- Start+tp_net: Packet data, aligned to TPACKET_ALIGNMENT=16.
- Pad to align to TPACKET_ALIGNMENT=16
*/
The following are conditions that are checked in packet_set_ring
tp_block_size must be a multiple of PAGE_SIZE (1)
tp_frame_size must be greater than TPACKET_HDRLEN (obvious)
tp_frame_size must be a multiple of TPACKET_ALIGNMENT
tp_frame_nr must be exactly frames_per_block*tp_block_nr
Note that tp_block_size should be chosen to be a power of two, or there will
be a waste of memory.
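The checks above can be folded into a small helper. The sketch below is only
an illustration under the assumptions of this document; the helper name and
the choice of one-page (or larger power-of-two) blocks are mine, not part of
the kernel interface:

#include <unistd.h>
#include <linux/if_packet.h>

/* Hypothetical helper: fill a struct tpacket_req so that it satisfies the
 * packet_set_ring conditions listed above.  Returns 0 on success, -1 if
 * the requested frame size is too small. */
static int make_ring_req(struct tpacket_req *req,
			 unsigned int frame_size, unsigned int block_nr)
{
	unsigned int page_size = (unsigned int) getpagesize();

	/* tp_frame_size must be a multiple of TPACKET_ALIGNMENT and
	 * greater than TPACKET_HDRLEN. */
	frame_size = TPACKET_ALIGN(frame_size);
	if (frame_size <= TPACKET_HDRLEN)
		return -1;

	/* tp_block_size must be a multiple of PAGE_SIZE; start with one
	 * page and double it (keeping a power of two) until a frame fits. */
	req->tp_block_size = page_size;
	while (req->tp_block_size < frame_size)
		req->tp_block_size <<= 1;

	req->tp_frame_size = frame_size;
	req->tp_block_nr   = block_nr;
	/* tp_frame_nr must be exactly frames_per_block * tp_block_nr. */
	req->tp_frame_nr   = (req->tp_block_size / frame_size) * block_nr;
	return 0;
}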
--------------------------------------------------------------------------------
+ Mapping and use of the circular buffer (ring)
--------------------------------------------------------------------------------
The mapping of the buffer in the user process is done with the conventional
mmap function. Even though the circular buffer is composed of several
physically discontiguous blocks of memory, they are contiguous in user
space, hence just one call to mmap is needed:

    mmap(0, size, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);

If tp_frame_size is a divisor of tp_block_size, frames will be
contiguously spaced by tp_frame_size bytes. If not, every
tp_block_size/tp_frame_size frames there will be a gap between
the frames. This is because a frame cannot span across two
blocks.
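As a sketch (my own helper, not kernel code), the address of the i-th frame
in the mapped region can be computed as follows, which naturally accounts for
the gap at the end of each block:

#include <stddef.h>
#include <stdint.h>
#include <linux/if_packet.h>

/* Hypothetical helper: return a pointer to frame number 'i' inside the
 * ring mapped at 'ring', given the tpacket_req used to create it. */
static void *frame_ptr(void *ring, const struct tpacket_req *req, unsigned int i)
{
	unsigned int frames_per_block = req->tp_block_size / req->tp_frame_size;
	unsigned int block  = i / frames_per_block;
	unsigned int offset = i % frames_per_block;

	return (uint8_t *) ring
	       + (size_t) block * req->tp_block_size
	       + (size_t) offset * req->tp_frame_size;
}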
At the beginning of each frame there is a status field (see
struct tpacket_hdr). If this field is 0, the frame is ready to be used by
the kernel; if not, there is a frame the user can read,
and the following flags apply:
from include/linux/if_packet.h
#define TP_STATUS_COPY 2
#define TP_STATUS_LOSING 4
#define TP_STATUS_CSUMNOTREADY 8
TP_STATUS_COPY        : This flag indicates that the frame (and associated
                         meta information) has been truncated because it's
                         larger than tp_frame_size. This packet can be
                         read entirely with recvfrom().
                         In order to make this work it must be enabled
                         beforehand with setsockopt() and
                         the PACKET_COPY_THRESH option.
                         The number of frames that can be buffered to
                         be read with recvfrom is limited like a normal socket.
                         See the SO_RCVBUF option in the socket (7) man page.
TP_STATUS_LOSING      : indicates there were packet drops since the last
                         time statistics were checked with getsockopt() and
                         the PACKET_STATISTICS option.
TP_STATUS_CSUMNOTREADY: currently used only for outgoing IP packets whose
                         checksum will be computed in hardware, so while
                         reading the packet we should not try to check its
                         checksum.
For convenience there are also the following defines:
#define TP_STATUS_KERNEL 0
#define TP_STATUS_USER 1
The kernel initializes all frames to TP_STATUS_KERNEL. When the kernel
receives a packet it puts it in the buffer and updates the status with
at least the TP_STATUS_USER flag. Then the user can read the packet;
once the packet is read the user must zero the status field, so the kernel
can use that frame buffer again.
The user can use poll (any other variant should apply too) to check if new
packets are in the ring:
struct pollfd pfd;
pfd.fd = fd;
pfd.revents = 0;
pfd.events = POLLIN|POLLRDNORM|POLLERR;
if (status == TP_STATUS_KERNEL)
retval = poll(&pfd, 1, timeout);
It doesn't incur a race condition to first check the status value and
then poll for frames.
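Putting it together, a minimal capture loop might look like the sketch
below (my own code, not taken from the kernel sources; frame_ptr() is the
hypothetical helper from the previous section, and error handling is
omitted):

#include <poll.h>
#include <linux/if_packet.h>

/* Sketch of the consumer side: walk the ring, hand off any frame marked
 * for user space, then give the slot back to the kernel. */
static void capture_loop(int fd, void *ring, const struct tpacket_req *req,
			 void (*handle)(const struct tpacket_hdr *h))
{
	unsigned int i = 0;

	for (;;) {
		struct tpacket_hdr *h = frame_ptr(ring, req, i);

		if (h->tp_status == TP_STATUS_KERNEL) {
			/* Nothing here yet: wait until the kernel fills a frame. */
			struct pollfd pfd = {
				.fd = fd,
				.events = POLLIN | POLLRDNORM | POLLERR,
			};
			poll(&pfd, 1, -1);
			continue;
		}

		handle(h);      /* packet data starts at (char *)h + h->tp_net */

		h->tp_status = TP_STATUS_KERNEL;   /* return the frame to the kernel */
		i = (i + 1) % req->tp_frame_nr;
	}
}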
--------------------------------------------------------------------------------
+ THANKS
--------------------------------------------------------------------------------
Jesse Brandeburg, for fixing my grammatical/spelling errors
net/Makefile

@@ -16,7 +16,9 @@ obj-$(CONFIG_LLC) += llc/
 obj-$(CONFIG_NET)		+= ethernet/ 802/ sched/ netlink/
 obj-$(CONFIG_INET)		+= ipv4/ xfrm/
 obj-$(CONFIG_UNIX)		+= unix/
-obj-$(CONFIG_IPV6)		+= ipv6/
+ifneq ($(CONFIG_IPV6),)
+obj-y				+= ipv6/
+endif
 obj-$(CONFIG_PACKET)		+= packet/
 obj-$(CONFIG_NET_KEY)		+= key/
 obj-$(CONFIG_NET_SCHED)		+= sched/
net/ipv4/netfilter/ip_nat_standalone.c

@@ -124,7 +124,16 @@ ip_nat_fn(unsigned int hooknum,
 	WRITE_LOCK(&ip_nat_lock);
 	/* Seen it before?  This can happen for loopback, retrans,
 	   or local packets.. */
-	if (!(info->initialized & (1 << maniptype))) {
+	if (!(info->initialized & (1 << maniptype))
+#ifndef CONFIG_IP_NF_NAT_LOCAL
+	    /* If this session has already been confirmed we must not
+	     * touch it again even if there is no mapping set up.
+	     * Can only happen on local->local traffic with
+	     * CONFIG_IP_NF_NAT_LOCAL disabled.
+	     */
+	    && !(ct->status & IPS_CONFIRMED)
+#endif
+	    ) {
 		unsigned int ret;
 
 		if (ct->master
net/ipv4/netfilter/ipt_MASQUERADE.c

@@ -45,7 +45,7 @@ masquerade_check(const char *tablename,
 	const struct ip_nat_multi_range *mr = targinfo;
 
 	if (strcmp(tablename, "nat") != 0) {
-		DEBUGP("masquerade_check: bad table `%s'.\n", table);
+		DEBUGP("masquerade_check: bad table `%s'.\n", tablename);
 		return 0;
 	}
 	if (targinfosize != IPT_ALIGN(sizeof(*mr))) {
net/ipv4/tcp_ipv4.c

@@ -1825,12 +1825,15 @@ int tcp_v4_rcv(struct sk_buff *skb)
 	goto discard_it;
 
 do_time_wait:
-	if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
-		goto discard_and_relse;
+	if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+		tcp_tw_put((struct tcp_tw_bucket *) sk);
+		goto discard_it;
+	}
 
 	if (skb->len < (th->doff << 2) || tcp_checksum_complete(skb)) {
 		TCP_INC_STATS_BH(TcpInErrs);
-		goto discard_and_relse;
+		tcp_tw_put((struct tcp_tw_bucket *) sk);
+		goto discard_it;
 	}
 	switch (tcp_timewait_state_process((struct tcp_tw_bucket *)sk,
 					   skb, th, skb->len)) {
net/ipv6/Makefile

@@ -19,3 +19,5 @@ obj-$(CONFIG_INET6_IPCOMP) += ipcomp6.o
 obj-$(CONFIG_NETFILTER)	+= netfilter/
 obj-$(CONFIG_IPV6_TUNNEL)	+= ip6_tunnel.o
+
+obj-y += exthdrs_core.o
net/ipv6/exthdrs.c

@@ -633,105 +633,3 @@ ipv6_dup_options(struct sock *sk, struct ipv6_txoptions *opt)
 	}
 	return opt2;
 }
-
-/*
- * find out if nexthdr is a well-known extension header or a protocol
- */
-
-int ipv6_ext_hdr(u8 nexthdr)
-{
-	/*
-	 * find out if nexthdr is an extension header or a protocol
-	 */
-	return ( (nexthdr == NEXTHDR_HOP)	||
-		 (nexthdr == NEXTHDR_ROUTING)	||
-		 (nexthdr == NEXTHDR_FRAGMENT)	||
-		 (nexthdr == NEXTHDR_AUTH)	||
-		 (nexthdr == NEXTHDR_NONE)	||
-		 (nexthdr == NEXTHDR_DEST) );
-}
-
-/*
- * Skip any extension headers. This is used by the ICMP module.
- *
- * Note that strictly speaking this conflicts with RFC 2460 4.0:
- * ...The contents and semantics of each extension header determine whether
- * or not to proceed to the next header.  Therefore, extension headers must
- * be processed strictly in the order they appear in the packet; a
- * receiver must not, for example, scan through a packet looking for a
- * particular kind of extension header and process that header prior to
- * processing all preceding ones.
- *
- * We do exactly this. This is a protocol bug. We can't decide after a
- * seeing an unknown discard-with-error flavour TLV option if it's a
- * ICMP error message or not (errors should never be send in reply to
- * ICMP error messages).
- *
- * But I see no other way to do this. This might need to be reexamined
- * when Linux implements ESP (and maybe AUTH) headers.
- * --AK
- *
- * This function parses (probably truncated) exthdr set "hdr"
- * of length "len". "nexthdrp" initially points to some place,
- * where type of the first header can be found.
- *
- * It skips all well-known exthdrs, and returns pointer to the start
- * of unparsable area i.e. the first header with unknown type.
- * If it is not NULL *nexthdr is updated by type/protocol of this header.
- *
- * NOTES: - if packet terminated with NEXTHDR_NONE it returns NULL.
- *        - it may return pointer pointing beyond end of packet,
- *          if the last recognized header is truncated in the middle.
- *        - if packet is truncated, so that all parsed headers are skipped,
- *          it returns NULL.
- *        - First fragment header is skipped, not-first ones
- *          are considered as unparsable.
- *        - ESP is unparsable for now and considered like
- *          normal payload protocol.
- *        - Note also special handling of AUTH header. Thanks to IPsec wizards.
- *
- * --ANK (980726)
- */
-
-int ipv6_skip_exthdr(const struct sk_buff *skb, int start, u8 *nexthdrp, int len)
-{
-	u8 nexthdr = *nexthdrp;
-
-	while (ipv6_ext_hdr(nexthdr)) {
-		struct ipv6_opt_hdr hdr;
-		int hdrlen;
-
-		if (len < (int)sizeof(struct ipv6_opt_hdr))
-			return -1;
-		if (nexthdr == NEXTHDR_NONE)
-			return -1;
-		if (skb_copy_bits(skb, start, &hdr, sizeof(hdr)))
-			BUG();
-		if (nexthdr == NEXTHDR_FRAGMENT) {
-			unsigned short frag_off;
-			if (skb_copy_bits(skb,
-					  start+offsetof(struct frag_hdr, frag_off),
-					  &frag_off,
-					  sizeof(frag_off))) {
-				return -1;
-			}
-			if (ntohs(frag_off) & ~0x7)
-				break;
-			hdrlen = 8;
-		} else if (nexthdr == NEXTHDR_AUTH)
-			hdrlen = (hdr.hdrlen+2)<<2;
-		else
-			hdrlen = ipv6_optlen(&hdr);
-
-		nexthdr = hdr.nexthdr;
-		len -= hdrlen;
-		start += hdrlen;
-	}
-
-	*nexthdrp = nexthdr;
-	return start;
-}
net/ipv6/exthdrs_core.c  (new file, 0 → 100644)

/*
 * IPv6 library code, needed by static components when full IPv6 support is
 * not configured or static.
 */
#include <net/ipv6.h>

/*
 * find out if nexthdr is a well-known extension header or a protocol
 */

int ipv6_ext_hdr(u8 nexthdr)
{
	/*
	 * find out if nexthdr is an extension header or a protocol
	 */
	return ( (nexthdr == NEXTHDR_HOP)	||
		 (nexthdr == NEXTHDR_ROUTING)	||
		 (nexthdr == NEXTHDR_FRAGMENT)	||
		 (nexthdr == NEXTHDR_AUTH)	||
		 (nexthdr == NEXTHDR_NONE)	||
		 (nexthdr == NEXTHDR_DEST) );
}

/*
 * Skip any extension headers. This is used by the ICMP module.
 *
 * Note that strictly speaking this conflicts with RFC 2460 4.0:
 * ...The contents and semantics of each extension header determine whether
 * or not to proceed to the next header.  Therefore, extension headers must
 * be processed strictly in the order they appear in the packet; a
 * receiver must not, for example, scan through a packet looking for a
 * particular kind of extension header and process that header prior to
 * processing all preceding ones.
 *
 * We do exactly this. This is a protocol bug. We can't decide after a
 * seeing an unknown discard-with-error flavour TLV option if it's a
 * ICMP error message or not (errors should never be send in reply to
 * ICMP error messages).
 *
 * But I see no other way to do this. This might need to be reexamined
 * when Linux implements ESP (and maybe AUTH) headers.
 * --AK
 *
 * This function parses (probably truncated) exthdr set "hdr"
 * of length "len". "nexthdrp" initially points to some place,
 * where type of the first header can be found.
 *
 * It skips all well-known exthdrs, and returns pointer to the start
 * of unparsable area i.e. the first header with unknown type.
 * If it is not NULL *nexthdr is updated by type/protocol of this header.
 *
 * NOTES: - if packet terminated with NEXTHDR_NONE it returns NULL.
 *        - it may return pointer pointing beyond end of packet,
 *          if the last recognized header is truncated in the middle.
 *        - if packet is truncated, so that all parsed headers are skipped,
 *          it returns NULL.
 *        - First fragment header is skipped, not-first ones
 *          are considered as unparsable.
 *        - ESP is unparsable for now and considered like
 *          normal payload protocol.
 *        - Note also special handling of AUTH header. Thanks to IPsec wizards.
 *
 * --ANK (980726)
 */

int ipv6_skip_exthdr(const struct sk_buff *skb, int start, u8 *nexthdrp, int len)
{
	u8 nexthdr = *nexthdrp;

	while (ipv6_ext_hdr(nexthdr)) {
		struct ipv6_opt_hdr hdr;
		int hdrlen;

		if (len < (int)sizeof(struct ipv6_opt_hdr))
			return -1;
		if (nexthdr == NEXTHDR_NONE)
			return -1;
		if (skb_copy_bits(skb, start, &hdr, sizeof(hdr)))
			BUG();
		if (nexthdr == NEXTHDR_FRAGMENT) {
			unsigned short frag_off;
			if (skb_copy_bits(skb,
					  start+offsetof(struct frag_hdr, frag_off),
					  &frag_off,
					  sizeof(frag_off))) {
				return -1;
			}
			if (ntohs(frag_off) & ~0x7)
				break;
			hdrlen = 8;
		} else if (nexthdr == NEXTHDR_AUTH)
			hdrlen = (hdr.hdrlen+2)<<2;
		else
			hdrlen = ipv6_optlen(&hdr);

		nexthdr = hdr.nexthdr;
		len -= hdrlen;
		start += hdrlen;
	}

	*nexthdrp = nexthdr;
	return start;
}

EXPORT_SYMBOL(ipv6_ext_hdr);
EXPORT_SYMBOL(ipv6_skip_exthdr);
net/ipv6/ipv6_syms.c

@@ -41,9 +41,7 @@ EXPORT_SYMBOL(xfrm6_rcv);
 #endif
 EXPORT_SYMBOL(rt6_lookup);
 EXPORT_SYMBOL(fl6_sock_lookup);
-EXPORT_SYMBOL(ipv6_ext_hdr);
 EXPORT_SYMBOL(ip6_append_data);
 EXPORT_SYMBOL(ip6_flush_pending_frames);
 EXPORT_SYMBOL(ip6_push_pending_frames);
 EXPORT_SYMBOL(ipv6_push_nfrag_opts);
-EXPORT_SYMBOL(ipv6_skip_exthdr);
net/packet/af_packet.c

@@ -34,6 +34,8 @@
 *	Alexey Kuznetsov	:	Untied from IPv4 stack.
 *	Cyrus Durgin		:	Fixed kerneld for kmod.
 *	Michal Ostrowski        :       Module initialization cleanup.
+*	Ulises Alonso           :       Frame number limit removal and
+*					packet_set_ring memory leak.
 *
 *		This program is free software; you can redistribute it and/or
 *		modify it under the terms of the GNU General Public License

@@ -168,30 +170,47 @@ static void packet_flush_mclist(struct sock *sk);
 struct packet_opt
 {
+	struct tpacket_stats	stats;
+#ifdef CONFIG_PACKET_MMAP
+	unsigned long		*pg_vec;
+	unsigned int		head;
+	unsigned int		frames_per_block;
+	unsigned int		frame_size;
+	unsigned int		frame_max;
+	int			copy_thresh;
+#endif
 	struct packet_type	prot_hook;
 	spinlock_t		bind_lock;
 	char			running;	/* prot_hook is attached*/
 	int			ifindex;	/* bound device		*/
 	unsigned short		num;
-	struct tpacket_stats	stats;
 #ifdef CONFIG_PACKET_MULTICAST
 	struct packet_mclist	*mclist;
 #endif
 #ifdef CONFIG_PACKET_MMAP
 	atomic_t		mapped;
-	unsigned long		*pg_vec;
 	unsigned int		pg_vec_order;
 	unsigned int		pg_vec_pages;
 	unsigned int		pg_vec_len;
-	struct tpacket_hdr	**iovec;
-	unsigned int		frame_size;
-	unsigned int		iovmax;
-	unsigned int		head;
-	int			copy_thresh;
 #endif
 };

+#ifdef CONFIG_PACKET_MMAP
+
+static inline unsigned long packet_lookup_frame(struct packet_opt *po, unsigned int position)
+{
+	unsigned int pg_vec_pos, frame_offset;
+	unsigned long frame;
+
+	pg_vec_pos = position / po->frames_per_block;
+	frame_offset = position % po->frames_per_block;
+
+	frame = (unsigned long) (po->pg_vec[pg_vec_pos] + (frame_offset * po->frame_size));
+
+	return frame;
+}
+#endif
+
 #define pkt_sk(__sk) ((struct packet_opt *)(__sk)->sk_protinfo)

 void packet_sock_destruct(struct sock *sk)

@@ -586,11 +605,11 @@ static int tpacket_rcv(struct sk_buff *skb, struct net_device *dev, struct pack
 	snaplen = skb->len - skb->data_len;
 
 	spin_lock(&sk->sk_receive_queue.lock);
-	h = po->iovec[po->head];
+	h = (struct tpacket_hdr *) packet_lookup_frame(po, po->head);
 	if (h->tp_status)
 		goto ring_is_full;
-	po->head = po->head != po->iovmax ? po->head+1 : 0;
+	po->head = po->head != po->frame_max ? po->head+1 : 0;
 	po->stats.tp_packets++;
 	if (copy_skb) {
 		status |= TP_STATUS_COPY;

@@ -1485,10 +1504,13 @@ unsigned int packet_poll(struct file * file, struct socket *sock, poll_table *wa
 	unsigned int mask = datagram_poll(file, sock, wait);
 
 	spin_lock_bh(&sk->sk_receive_queue.lock);
-	if (po->iovec) {
-		unsigned last = po->head ? po->head-1 : po->iovmax;
+	if (po->pg_vec) {
+		unsigned last = po->head ? po->head-1 : po->frame_max;
+		struct tpacket_hdr *h;
 
-		if (po->iovec[last]->tp_status)
+		h = (struct tpacket_hdr *) packet_lookup_frame(po, last);
+
+		if (h->tp_status)
 			mask |= POLLIN | POLLRDNORM;
 	}
 	spin_unlock_bh(&sk->sk_receive_queue.lock);

@@ -1548,16 +1570,18 @@ static void free_pg_vec(unsigned long *pg_vec, unsigned order, unsigned len)
 static int packet_set_ring(struct sock *sk, struct tpacket_req *req, int closing)
 {
 	unsigned long *pg_vec = NULL;
-	struct tpacket_hdr **io_vec = NULL;
 	struct packet_opt *po = pkt_sk(sk);
 	int was_running, num, order = 0;
 	int err = 0;
 
 	if (req->tp_block_nr) {
 		int i, l;
-		int frames_per_block;
 
 		/* Sanity tests and some calculations */
+
+		if (po->pg_vec)
+			return -EBUSY;
+
 		if ((int)req->tp_block_size <= 0)
 			return -EINVAL;
 		if (req->tp_block_size & (PAGE_SIZE-1))

@@ -1566,10 +1590,11 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req, int closing
 			return -EINVAL;
 		if (req->tp_frame_size & (TPACKET_ALIGNMENT-1))
 			return -EINVAL;
-		frames_per_block = req->tp_block_size/req->tp_frame_size;
-		if (frames_per_block <= 0)
+		po->frames_per_block = req->tp_block_size/req->tp_frame_size;
+		if (po->frames_per_block <= 0)
 			return -EINVAL;
-		if (frames_per_block*req->tp_block_nr != req->tp_frame_nr)
+		if (po->frames_per_block*req->tp_block_nr != req->tp_frame_nr)
 			return -EINVAL;
 
 		/* OK! */

@@ -1596,20 +1621,16 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req, int closing
 		}
 		/* Page vector is allocated */
 
-		/* Draw frames */
-		io_vec = kmalloc(req->tp_frame_nr*sizeof(struct tpacket_hdr *), GFP_KERNEL);
-		if (io_vec == NULL)
-			goto out_free_pgvec;
-		memset(io_vec, 0, req->tp_frame_nr*sizeof(struct tpacket_hdr *));
-
 		l = 0;
 		for (i = 0; i < req->tp_block_nr; i++) {
 			unsigned long ptr = pg_vec[i];
+			struct tpacket_hdr *header;
 			int k;
 
-			for (k=0; k<frames_per_block; k++, l++) {
-				io_vec[l] = (struct tpacket_hdr *)ptr;
-				io_vec[l]->tp_status = TP_STATUS_KERNEL;
+			for (k=0; k<po->frames_per_block; k++) {
+				header = (struct tpacket_hdr *)ptr;
+				header->tp_status = TP_STATUS_KERNEL;
 				ptr += req->tp_frame_size;
 			}
 		}

@@ -1642,8 +1663,7 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req, int closing
 	spin_lock_bh(&sk->sk_receive_queue.lock);
 	pg_vec = XC(po->pg_vec, pg_vec);
-	io_vec = XC(po->iovec, io_vec);
-	po->iovmax = req->tp_frame_nr-1;
+	po->frame_max = req->tp_frame_nr-1;
 	po->head = 0;
 	po->frame_size = req->tp_frame_size;
 	spin_unlock_bh(&sk->sk_receive_queue.lock);

@@ -1652,7 +1672,7 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req, int closing
 	req->tp_block_nr = XC(po->pg_vec_len, req->tp_block_nr);
 	po->pg_vec_pages = req->tp_block_size/PAGE_SIZE;
-	po->prot_hook.func = po->iovec ? tpacket_rcv : packet_rcv;
+	po->prot_hook.func = po->pg_vec ? tpacket_rcv : packet_rcv;
 	skb_queue_purge(&sk->sk_receive_queue);
 #undef XC
 	if (atomic_read(&po->mapped))

@@ -1670,9 +1690,6 @@ static int packet_set_ring(struct sock *sk, struct tpacket_req *req, int closing
 	release_sock(sk);
 
-	if (io_vec)
-		kfree(io_vec);
-
 out_free_pgvec:
 	if (pg_vec)
 		free_pg_vec(pg_vec, order, req->tp_block_nr);