Commit 5fccd64a authored by David S. Miller

Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next

Pablo Neira Ayuso says:

====================
Netfilter/IPVS updates for net-next

The following patchset contains a large Netfilter update for net-next,
to summarise:

1) Add support for stateful objects. This series provides an nf_tables
   native alternative to the extended accounting infrastructure for
   nf_tables. Two initial stateful objects are supported: counters and
   quotas. Objects are identified by a user-defined name; you can fetch
   and reset them at any time. You can also use maps to allow fast lookups
   using any arbitrary key combination (a minimal sketch follows below).
   More info at:

   http://marc.info/?l=netfilter-devel&m=148029128323837&w=2
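
   A minimal sketch of how a stateful object type plugs into the new
   nft_object_type API added in this series (the "example" object and its
   private data are made up for illustration; a real type also provides
   init/dump callbacks and a netlink policy to parse its attributes):

      /* hypothetical per-object private data */
      struct nft_example_obj {
              atomic64_t      packets;
      };

      static void nft_example_obj_eval(struct nft_object *obj,
                                       struct nft_regs *regs,
                                       const struct nft_pktinfo *pkt)
      {
              struct nft_example_obj *priv = nft_obj_data(obj);

              /* runs from the packet path when an objref expression or a
               * map lookup points at this named object
               */
              atomic64_inc(&priv->packets);
      }

      static struct nft_object_type nft_example_obj_type __read_mostly = {
              .type   = NFT_OBJECT_COUNTER,   /* a real new type would claim
                                               * its own NFT_OBJECT_* value */
              .size   = sizeof(struct nft_example_obj),
              .eval   = nft_example_obj_eval,
              .owner  = THIS_MODULE,
      };

   The type would be registered with nft_register_obj() from module init
   and removed again with nft_unregister_obj() on exit.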

2) On-demand registration of nf_conntrack and defrag hooks per netns.
   Register the nf_conntrack hooks only if we have a stateful ruleset,
   i.e. state-based filtering or NAT. The new nf_conntrack_default_on
   sysctl controls this for newly created net namespaces. Default
   behaviour is not modified (a sketch of the new helper pair follows
   below). Patches from Florian Westphal.
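
   A sketch of the pattern a conntrack-dependent match or target follows
   after this change (the "example" target is hypothetical; the real
   conversions for MASQUERADE, SYNPROXY, CLUSTERIP etc. appear further
   down in the diff):

      static int example_tg_check(const struct xt_tgchk_param *par)
      {
              /* take a per-netns reference: the conntrack (and, where
               * needed, defrag) hooks get registered on first use
               */
              return nf_ct_netns_get(par->net, par->family);
      }

      static void example_tg_destroy(const struct xt_tgdtor_param *par)
      {
              /* drop the reference; the hooks go away once the last
               * stateful user in this netns is gone
               */
              nf_ct_netns_put(par->net, par->family);
      }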

3) Allocate 4k chunks and then use these for x_tables counter allocation
   requests; this improves both ruleset load time and datapath ruleset
   evaluation (see the fragment below). Patches from Florian Westphal.
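
   Roughly what the converted table translation loop looks like (fragment
   only; iter, entry0 and newinfo are the usual x_tables translation
   locals, and the per-entry work is really done in find_check_entry()):

      struct xt_percpu_counter_alloc_state alloc_state = { 0 };

      xt_entry_foreach(iter, entry0, newinfo->size) {
              /* counters are now carved out of shared 4k percpu chunks
               * instead of one __alloc_percpu() call per rule
               */
              if (!xt_percpu_counter_alloc(&alloc_state, &iter->counters))
                      return -ENOMEM;
      }

   Each rule's counter is later released with
   xt_percpu_counter_free(&e->counters).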

4) Add support for ebpf to the existing x_tables bpf extension.
   From Willem de Bruijn.
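
   On the userspace side, revision 1 of the "bpf" match (struct
   xt_bpf_info_v1, added below) can carry either classic BPF bytecode or
   an eBPF program; a rough sketch, with prog_fd standing in for a
   program descriptor obtained via bpf(2):

      struct xt_bpf_info_v1 info = {
              .mode = XT_BPF_MODE_FD_ELF,     /* reference an eBPF prog fd */
              .fd   = prog_fd,                /* assumed valid program fd  */
      };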

5) Update the layer 4 checksum if any of the pseudoheader fields is
   updated. This provides a limited form of 1:1 stateless NAT that makes
   sense in specific scenarios, e.g. load balancing.

6) Add support to flush sets in nf_tables. This series comes with a new
   set->ops->deactivate_one() indirection, given that we have to walk
   over the list of set elements and deactivate them one by one. The
   existing set->ops->deactivate() performs an element lookup that we
   don't need (see the sketch below).
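
   A sketch of the flush path this enables: while walking the set, the
   iterator already hands back each element's backend-private data, so
   deactivation needs no second lookup (ctx, set and elem_priv are
   assumed to come from the surrounding walk callback):

      if (!set->ops->deactivate_one(ctx->net, set, elem_priv))
              return -ENOENT; /* already deactivated in this generation */
      set->ndeact++;          /* bookkeeping for deferred removal */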

7) Two patches to avoid cloning packets, thus speeding up packet forwarding
   via nft_fwd from ingress. From Florian Westphal.

8) Two IPVS patches via Simon Horman: decrement the TTL in all modes to
   prevent infinite loops, patch from Dwip Banerjee. And one minor
   refactoring from Gao Feng.

9) Revisit the recent log support for the nf_tables netdev families: one
   patch to ensure that we correctly handle non-Ethernet packets, and
   another to add the missing logger definition for netdev. Patches from
   Liping Zhang.

10) Three patches for nft_fib: one to address insufficient register
    initialization and another to fix an incorrect (although harmless)
    byteswap operation. Moreover, update xt_rpfilter and nft_fib to match
    lbcast packets with zeronet as source, e.g. DHCP Discover packets
    (0.0.0.0 -> 255.255.255.255). Also from Liping Zhang.

11) Built-in DCCP, SCTP and UDPlite conntrack and NAT support, from
    Davide Caratti. While DCCP is rather hopeless lately, and UDPlite has
    been broken in multicast mode for some time, let's give them a
    chance by placing them at the same level as other existing protocols.
    Thus, users don't have to explicitly modprobe support for them and
    NAT rules work for them. Some people point to the lack of support in
    SOHO Linux-based routers as what makes deployment of new protocols
    harder. I guess other middleboxes out there on the Internet are also
    to blame. Anyway, let's see if this has any impact in the mid-run.

12) Skip the software SCTP checksum calculation if the NIC comes with
    SCTP checksum offload support (a short sketch follows below). From
    Davide Caratti.
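
    Roughly the shape of the check on the NAT side (sctph and hdroff are
    assumed to come from the surrounding SCTP header mangling helper):

       /* only redo the CRC32c in software when the device is not
        * already set up to checksum this packet
        */
       if (skb->ip_summed != CHECKSUM_PARTIAL)
               sctph->checksum = sctp_compute_cksum(skb, hdroff);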

13) Initial core refactoring to prepare the conversion to a hook array.
    Three patches from Aaron Conole.

14) Gao Feng made an incorrect conversion to a switch statement in the
    xt_multiport extension in a patch from the previous batch. Fix it in
    this batch.

15) Bring the vmalloc call in x_tables in sync with the kmalloc flags to
    avoid a warning and likely OOM killer intervention. From Marcelo
    Ricardo Leitner.

16) Update Arturo Borrero's email address in all source code headers.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 63c36c40 73c25fb1
@@ -96,6 +96,17 @@ nf_conntrack_max - INTEGER
	Size of connection tracking table. Default value is
	nf_conntrack_buckets value * 4.
nf_conntrack_default_on - BOOLEAN
	0 - don't register conntrack in new net namespaces
	1 - register conntrack in new net namespaces (default)
	This controls whether newly created network namespaces have connection
	tracking enabled by default. It will be enabled automatically
	regardless of this setting if the new net namespace requires
	connection tracking, e.g. when NAT rules are created.
	This setting is only visible in initial user namespace, it has no
	effect on existing namespaces.
nf_conntrack_tcp_be_liberal - BOOLEAN
	0 - disabled (default)
	not 0 - enabled
......
@@ -75,10 +75,39 @@ struct nf_hook_ops {
struct nf_hook_entry {
	struct nf_hook_entry __rcu	*next;
-	struct nf_hook_ops		ops;
	nf_hookfn			*hook;
	void				*priv;
	const struct nf_hook_ops	*orig_ops;
};
static inline void
nf_hook_entry_init(struct nf_hook_entry *entry, const struct nf_hook_ops *ops)
{
entry->next = NULL;
entry->hook = ops->hook;
entry->priv = ops->priv;
entry->orig_ops = ops;
}
static inline int
nf_hook_entry_priority(const struct nf_hook_entry *entry)
{
return entry->orig_ops->priority;
}
static inline int
nf_hook_entry_hookfn(const struct nf_hook_entry *entry, struct sk_buff *skb,
struct nf_hook_state *state)
{
return entry->hook(entry->priv, skb, state);
}
static inline const struct nf_hook_ops *
nf_hook_entry_ops(const struct nf_hook_entry *entry)
{
return entry->orig_ops;
}
static inline void nf_hook_state_init(struct nf_hook_state *p, static inline void nf_hook_state_init(struct nf_hook_state *p,
unsigned int hook, unsigned int hook,
u_int8_t pf, u_int8_t pf,
......
@@ -25,7 +25,7 @@ enum ct_dccp_roles {
#define CT_DCCP_ROLE_MAX	(__CT_DCCP_ROLE_MAX - 1)
#ifdef __KERNEL__
-#include <net/netfilter/nf_conntrack_tuple.h>
#include <linux/netfilter/nf_conntrack_tuple_common.h>
struct nf_ct_dccp {
	u_int8_t	role[IP_CT_DIR_MAX];
...
@@ -403,38 +403,14 @@ static inline unsigned long ifname_compare_aligned(const char *_a,
	return ret;
}
struct xt_percpu_counter_alloc_state {
	unsigned int off;
	const char __percpu *mem;
};
-/* On SMP, ip(6)t_entry->counters.pcnt holds address of the
- * real (percpu) counter. On !SMP, its just the packet count,
- * so nothing needs to be done there.
- *
- * xt_percpu_counter_alloc returns the address of the percpu
- * counter, or 0 on !SMP. We force an alignment of 16 bytes
- * so that bytes/packets share a common cache line.
- *
- * Hence caller must use IS_ERR_VALUE to check for error, this
- * allows us to return 0 for single core systems without forcing
- * callers to deal with SMP vs. NONSMP issues.
- */
-static inline unsigned long xt_percpu_counter_alloc(void)
-{
-	if (nr_cpu_ids > 1) {
-		void __percpu *res = __alloc_percpu(sizeof(struct xt_counters),
-						    sizeof(struct xt_counters));
-		if (res == NULL)
-			return -ENOMEM;
-		return (__force unsigned long) res;
-	}
-	return 0;
-}
-static inline void xt_percpu_counter_free(u64 pcnt)
-{
-	if (nr_cpu_ids > 1)
-		free_percpu((void __percpu *) (unsigned long) pcnt);
-}
bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
			     struct xt_counters *counter);
void xt_percpu_counter_free(struct xt_counters *cnt);
static inline struct xt_counters *
xt_get_this_cpu_counter(struct xt_counters *cnt)
...
@@ -19,6 +19,7 @@ static inline int nf_hook_ingress(struct sk_buff *skb)
{
	struct nf_hook_entry *e = rcu_dereference(skb->dev->nf_hooks_ingress);
	struct nf_hook_state state;
	int ret;
	/* Must recheck the ingress hook head, in the event it became NULL
	 * after the check in nf_hook_ingress_active evaluated to true.
@@ -29,7 +30,11 @@ static inline int nf_hook_ingress(struct sk_buff *skb)
	nf_hook_state_init(&state, NF_NETDEV_INGRESS,
			   NFPROTO_NETDEV, skb->dev, NULL, NULL,
			   dev_net(skb->dev), NULL);
-	return nf_hook_slow(skb, &state, e);
	ret = nf_hook_slow(skb, &state, e);
	if (ret == 0)
		return -1;
	return ret;
}
static inline void nf_hook_ingress_init(struct net_device *dev)
...
...@@ -15,6 +15,15 @@ extern struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4; ...@@ -15,6 +15,15 @@ extern struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp4; extern struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp4;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udp4; extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udp4;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_icmp; extern struct nf_conntrack_l4proto nf_conntrack_l4proto_icmp;
#ifdef CONFIG_NF_CT_PROTO_DCCP
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_dccp4;
#endif
#ifdef CONFIG_NF_CT_PROTO_SCTP
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4;
#endif
#ifdef CONFIG_NF_CT_PROTO_UDPLITE
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite4;
#endif
int nf_conntrack_ipv4_compat_init(void); int nf_conntrack_ipv4_compat_init(void);
void nf_conntrack_ipv4_compat_fini(void); void nf_conntrack_ipv4_compat_fini(void);
......
#ifndef _NF_DEFRAG_IPV4_H
#define _NF_DEFRAG_IPV4_H
struct net;
-void nf_defrag_ipv4_enable(void);
int nf_defrag_ipv4_enable(struct net *);
#endif /* _NF_DEFRAG_IPV4_H */
...@@ -6,6 +6,15 @@ extern struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6; ...@@ -6,6 +6,15 @@ extern struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp6; extern struct nf_conntrack_l4proto nf_conntrack_l4proto_tcp6;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udp6; extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udp6;
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_icmpv6; extern struct nf_conntrack_l4proto nf_conntrack_l4proto_icmpv6;
#ifdef CONFIG_NF_CT_PROTO_DCCP
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_dccp6;
#endif
#ifdef CONFIG_NF_CT_PROTO_SCTP
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6;
#endif
#ifdef CONFIG_NF_CT_PROTO_UDPLITE
extern struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite6;
#endif
#include <linux/sysctl.h> #include <linux/sysctl.h>
extern struct ctl_table nf_ct_ipv6_sysctl_table[]; extern struct ctl_table nf_ct_ipv6_sysctl_table[];
......
#ifndef _NF_DEFRAG_IPV6_H
#define _NF_DEFRAG_IPV6_H
struct net;
-void nf_defrag_ipv6_enable(void);
int nf_defrag_ipv6_enable(struct net *);
int nf_ct_frag6_init(void);
void nf_ct_frag6_cleanup(void);
...
...@@ -181,6 +181,10 @@ static inline void nf_ct_put(struct nf_conn *ct) ...@@ -181,6 +181,10 @@ static inline void nf_ct_put(struct nf_conn *ct)
int nf_ct_l3proto_try_module_get(unsigned short l3proto); int nf_ct_l3proto_try_module_get(unsigned short l3proto);
void nf_ct_l3proto_module_put(unsigned short l3proto); void nf_ct_l3proto_module_put(unsigned short l3proto);
/* load module; enable/disable conntrack in this namespace */
int nf_ct_netns_get(struct net *net, u8 nfproto);
void nf_ct_netns_put(struct net *net, u8 nfproto);
/* /*
* Allocate a hashtable of hlist_head (if nulls == 0), * Allocate a hashtable of hlist_head (if nulls == 0),
* or hlist_nulls_head (if nulls == 1) * or hlist_nulls_head (if nulls == 1)
......
...@@ -52,6 +52,10 @@ struct nf_conntrack_l3proto { ...@@ -52,6 +52,10 @@ struct nf_conntrack_l3proto {
int (*tuple_to_nlattr)(struct sk_buff *skb, int (*tuple_to_nlattr)(struct sk_buff *skb,
const struct nf_conntrack_tuple *t); const struct nf_conntrack_tuple *t);
/* Called when netns wants to use connection tracking */
int (*net_ns_get)(struct net *);
void (*net_ns_put)(struct net *);
/* /*
* Calculate size of tuple nlattr * Calculate size of tuple nlattr
*/ */
...@@ -63,18 +67,24 @@ struct nf_conntrack_l3proto { ...@@ -63,18 +67,24 @@ struct nf_conntrack_l3proto {
size_t nla_size; size_t nla_size;
/* Init l3proto pernet data */
int (*init_net)(struct net *net);
/* Module (if any) which this is connected to. */ /* Module (if any) which this is connected to. */
struct module *me; struct module *me;
}; };
extern struct nf_conntrack_l3proto __rcu *nf_ct_l3protos[AF_MAX]; extern struct nf_conntrack_l3proto __rcu *nf_ct_l3protos[AF_MAX];
#ifdef CONFIG_SYSCTL
/* Protocol pernet registration. */ /* Protocol pernet registration. */
int nf_ct_l3proto_pernet_register(struct net *net, int nf_ct_l3proto_pernet_register(struct net *net,
struct nf_conntrack_l3proto *proto); struct nf_conntrack_l3proto *proto);
#else
static inline int nf_ct_l3proto_pernet_register(struct net *n,
struct nf_conntrack_l3proto *p)
{
return 0;
}
#endif
void nf_ct_l3proto_pernet_unregister(struct net *net, void nf_ct_l3proto_pernet_unregister(struct net *net,
struct nf_conntrack_l3proto *proto); struct nf_conntrack_l3proto *proto);
......
...@@ -2,5 +2,6 @@ ...@@ -2,5 +2,6 @@
#define _NF_DUP_NETDEV_H_ #define _NF_DUP_NETDEV_H_
void nf_dup_netdev_egress(const struct nft_pktinfo *pkt, int oif); void nf_dup_netdev_egress(const struct nft_pktinfo *pkt, int oif);
void nf_fwd_netdev_egress(const struct nft_pktinfo *pkt, int oif);
#endif #endif
@@ -109,7 +109,9 @@ void nf_log_dump_packet_common(struct nf_log_buf *m, u_int8_t pf,
			       const struct net_device *out,
			       const struct nf_loginfo *loginfo,
			       const char *prefix);
-void nf_log_l2packet(struct net *net, u_int8_t pf, unsigned int hooknum,
void nf_log_l2packet(struct net *net, u_int8_t pf,
		     __be16 protocol,
		     unsigned int hooknum,
		     const struct sk_buff *skb,
		     const struct net_device *in,
		     const struct net_device *out,
...
...@@ -54,6 +54,15 @@ extern const struct nf_nat_l4proto nf_nat_l4proto_udp; ...@@ -54,6 +54,15 @@ extern const struct nf_nat_l4proto nf_nat_l4proto_udp;
extern const struct nf_nat_l4proto nf_nat_l4proto_icmp; extern const struct nf_nat_l4proto nf_nat_l4proto_icmp;
extern const struct nf_nat_l4proto nf_nat_l4proto_icmpv6; extern const struct nf_nat_l4proto nf_nat_l4proto_icmpv6;
extern const struct nf_nat_l4proto nf_nat_l4proto_unknown; extern const struct nf_nat_l4proto nf_nat_l4proto_unknown;
#ifdef CONFIG_NF_NAT_PROTO_DCCP
extern const struct nf_nat_l4proto nf_nat_l4proto_dccp;
#endif
#ifdef CONFIG_NF_NAT_PROTO_SCTP
extern const struct nf_nat_l4proto nf_nat_l4proto_sctp;
#endif
#ifdef CONFIG_NF_NAT_PROTO_UDPLITE
extern const struct nf_nat_l4proto nf_nat_l4proto_udplite;
#endif
bool nf_nat_l4proto_in_range(const struct nf_conntrack_tuple *tuple, bool nf_nat_l4proto_in_range(const struct nf_conntrack_tuple *tuple,
enum nf_nat_manip_type maniptype, enum nf_nat_manip_type maniptype,
......
@@ -259,7 +259,8 @@ struct nft_expr;
 * @lookup: look up an element within the set
 * @insert: insert new element into set
 * @activate: activate new element in the next generation
- * @deactivate: deactivate element in the next generation
 * @deactivate: lookup for element and deactivate it in the next generation
 * @deactivate_one: deactivate element in the next generation
 * @remove: remove element from set
 * @walk: iterate over all set elemeennts
 * @privsize: function to return size of set private data
...@@ -294,6 +295,9 @@ struct nft_set_ops { ...@@ -294,6 +295,9 @@ struct nft_set_ops {
void * (*deactivate)(const struct net *net, void * (*deactivate)(const struct net *net,
const struct nft_set *set, const struct nft_set *set,
const struct nft_set_elem *elem); const struct nft_set_elem *elem);
bool (*deactivate_one)(const struct net *net,
const struct nft_set *set,
void *priv);
void (*remove)(const struct nft_set *set, void (*remove)(const struct nft_set *set,
const struct nft_set_elem *elem); const struct nft_set_elem *elem);
void (*walk)(const struct nft_ctx *ctx, void (*walk)(const struct nft_ctx *ctx,
...@@ -326,6 +330,7 @@ void nft_unregister_set(struct nft_set_ops *ops); ...@@ -326,6 +330,7 @@ void nft_unregister_set(struct nft_set_ops *ops);
* @name: name of the set * @name: name of the set
* @ktype: key type (numeric type defined by userspace, not used in the kernel) * @ktype: key type (numeric type defined by userspace, not used in the kernel)
* @dtype: data type (verdict or numeric type defined by userspace) * @dtype: data type (verdict or numeric type defined by userspace)
* @objtype: object type (see NFT_OBJECT_* definitions)
* @size: maximum set size * @size: maximum set size
* @nelems: number of elements * @nelems: number of elements
* @ndeact: number of deactivated elements queued for removal * @ndeact: number of deactivated elements queued for removal
...@@ -347,6 +352,7 @@ struct nft_set { ...@@ -347,6 +352,7 @@ struct nft_set {
char name[NFT_SET_MAXNAMELEN]; char name[NFT_SET_MAXNAMELEN];
u32 ktype; u32 ktype;
u32 dtype; u32 dtype;
u32 objtype;
u32 size; u32 size;
atomic_t nelems; atomic_t nelems;
u32 ndeact; u32 ndeact;
...@@ -416,6 +422,7 @@ void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set, ...@@ -416,6 +422,7 @@ void nf_tables_unbind_set(const struct nft_ctx *ctx, struct nft_set *set,
* @NFT_SET_EXT_EXPIRATION: element expiration time * @NFT_SET_EXT_EXPIRATION: element expiration time
* @NFT_SET_EXT_USERDATA: user data associated with the element * @NFT_SET_EXT_USERDATA: user data associated with the element
* @NFT_SET_EXT_EXPR: expression assiociated with the element * @NFT_SET_EXT_EXPR: expression assiociated with the element
* @NFT_SET_EXT_OBJREF: stateful object reference associated with element
* @NFT_SET_EXT_NUM: number of extension types * @NFT_SET_EXT_NUM: number of extension types
*/ */
enum nft_set_extensions { enum nft_set_extensions {
...@@ -426,6 +433,7 @@ enum nft_set_extensions { ...@@ -426,6 +433,7 @@ enum nft_set_extensions {
NFT_SET_EXT_EXPIRATION, NFT_SET_EXT_EXPIRATION,
NFT_SET_EXT_USERDATA, NFT_SET_EXT_USERDATA,
NFT_SET_EXT_EXPR, NFT_SET_EXT_EXPR,
NFT_SET_EXT_OBJREF,
NFT_SET_EXT_NUM NFT_SET_EXT_NUM
}; };
...@@ -554,6 +562,11 @@ static inline struct nft_set_ext *nft_set_elem_ext(const struct nft_set *set, ...@@ -554,6 +562,11 @@ static inline struct nft_set_ext *nft_set_elem_ext(const struct nft_set *set,
return elem + set->ops->elemsize; return elem + set->ops->elemsize;
} }
static inline struct nft_object **nft_set_ext_obj(const struct nft_set_ext *ext)
{
return nft_set_ext(ext, NFT_SET_EXT_OBJREF);
}
void *nft_set_elem_init(const struct nft_set *set, void *nft_set_elem_init(const struct nft_set *set,
const struct nft_set_ext_tmpl *tmpl, const struct nft_set_ext_tmpl *tmpl,
const u32 *key, const u32 *data, const u32 *key, const u32 *data,
...@@ -875,6 +888,7 @@ unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv); ...@@ -875,6 +888,7 @@ unsigned int nft_do_chain(struct nft_pktinfo *pkt, void *priv);
* @list: used internally * @list: used internally
* @chains: chains in the table * @chains: chains in the table
* @sets: sets in the table * @sets: sets in the table
* @objects: stateful objects in the table
* @hgenerator: handle generator state * @hgenerator: handle generator state
* @use: number of chain references to this table * @use: number of chain references to this table
* @flags: table flag (see enum nft_table_flags) * @flags: table flag (see enum nft_table_flags)
...@@ -885,6 +899,7 @@ struct nft_table { ...@@ -885,6 +899,7 @@ struct nft_table {
struct list_head list; struct list_head list;
struct list_head chains; struct list_head chains;
struct list_head sets; struct list_head sets;
struct list_head objects;
u64 hgenerator; u64 hgenerator;
u32 use; u32 use;
u16 flags:14, u16 flags:14,
...@@ -934,6 +949,80 @@ void nft_unregister_expr(struct nft_expr_type *); ...@@ -934,6 +949,80 @@ void nft_unregister_expr(struct nft_expr_type *);
int nft_verdict_dump(struct sk_buff *skb, int type, int nft_verdict_dump(struct sk_buff *skb, int type,
const struct nft_verdict *v); const struct nft_verdict *v);
/**
* struct nft_object - nf_tables stateful object
*
* @list: table stateful object list node
* @table: table this object belongs to
* @type: pointer to object type
* @data: pointer to object data
* @name: name of this stateful object
* @genmask: generation mask
* @use: number of references to this stateful object
* @data: object data, layout depends on type
*/
struct nft_object {
struct list_head list;
char name[NFT_OBJ_MAXNAMELEN];
struct nft_table *table;
u32 genmask:2,
use:30;
/* runtime data below here */
const struct nft_object_type *type ____cacheline_aligned;
unsigned char data[]
__attribute__((aligned(__alignof__(u64))));
};
static inline void *nft_obj_data(const struct nft_object *obj)
{
return (void *)obj->data;
}
#define nft_expr_obj(expr) *((struct nft_object **)nft_expr_priv(expr))
struct nft_object *nf_tables_obj_lookup(const struct nft_table *table,
const struct nlattr *nla, u32 objtype,
u8 genmask);
int nft_obj_notify(struct net *net, struct nft_table *table,
struct nft_object *obj, u32 portid, u32 seq,
int event, int family, int report, gfp_t gfp);
/**
* struct nft_object_type - stateful object type
*
* @eval: stateful object evaluation function
* @list: list node in list of object types
* @type: stateful object numeric type
* @size: stateful object size
* @owner: module owner
* @maxattr: maximum netlink attribute
* @policy: netlink attribute policy
* @init: initialize object from netlink attributes
* @destroy: release existing stateful object
* @dump: netlink dump stateful object
*/
struct nft_object_type {
void (*eval)(struct nft_object *obj,
struct nft_regs *regs,
const struct nft_pktinfo *pkt);
struct list_head list;
u32 type;
unsigned int size;
unsigned int maxattr;
struct module *owner;
const struct nla_policy *policy;
int (*init)(const struct nlattr * const tb[],
struct nft_object *obj);
void (*destroy)(struct nft_object *obj);
int (*dump)(struct sk_buff *skb,
struct nft_object *obj,
bool reset);
};
int nft_register_obj(struct nft_object_type *obj_type);
void nft_unregister_obj(struct nft_object_type *obj_type);
/** /**
* struct nft_traceinfo - nft tracing information and state * struct nft_traceinfo - nft tracing information and state
* *
...@@ -981,6 +1070,9 @@ void nft_trace_notify(struct nft_traceinfo *info); ...@@ -981,6 +1070,9 @@ void nft_trace_notify(struct nft_traceinfo *info);
#define MODULE_ALIAS_NFT_SET() \ #define MODULE_ALIAS_NFT_SET() \
MODULE_ALIAS("nft-set") MODULE_ALIAS("nft-set")
#define MODULE_ALIAS_NFT_OBJ(type) \
MODULE_ALIAS("nft-obj-" __stringify(type))
/* /*
* The gencursor defines two generations, the currently active and the * The gencursor defines two generations, the currently active and the
* next one. Objects contain a bitmask of 2 bits specifying the generations * next one. Objects contain a bitmask of 2 bits specifying the generations
...@@ -1157,4 +1249,11 @@ struct nft_trans_elem { ...@@ -1157,4 +1249,11 @@ struct nft_trans_elem {
#define nft_trans_elem(trans) \ #define nft_trans_elem(trans) \
(((struct nft_trans_elem *)trans->data)->elem) (((struct nft_trans_elem *)trans->data)->elem)
struct nft_trans_obj {
struct nft_object *obj;
};
#define nft_trans_obj(trans) \
(((struct nft_trans_obj *)trans->data)->obj)
#endif /* _NET_NF_TABLES_H */ #endif /* _NET_NF_TABLES_H */
...@@ -45,6 +45,7 @@ struct nft_payload_set { ...@@ -45,6 +45,7 @@ struct nft_payload_set {
enum nft_registers sreg:8; enum nft_registers sreg:8;
u8 csum_type; u8 csum_type;
u8 csum_offset; u8 csum_offset;
u8 csum_flags;
}; };
extern const struct nft_expr_ops nft_payload_fast_ops; extern const struct nft_expr_ops nft_payload_fast_ops;
......
...@@ -6,6 +6,12 @@ ...@@ -6,6 +6,12 @@
#include <linux/atomic.h> #include <linux/atomic.h>
#include <linux/workqueue.h> #include <linux/workqueue.h>
#include <linux/netfilter/nf_conntrack_tcp.h> #include <linux/netfilter/nf_conntrack_tcp.h>
#ifdef CONFIG_NF_CT_PROTO_DCCP
#include <linux/netfilter/nf_conntrack_dccp.h>
#endif
#ifdef CONFIG_NF_CT_PROTO_SCTP
#include <linux/netfilter/nf_conntrack_sctp.h>
#endif
#include <linux/seqlock.h> #include <linux/seqlock.h>
struct ctl_table_header; struct ctl_table_header;
...@@ -48,12 +54,49 @@ struct nf_icmp_net { ...@@ -48,12 +54,49 @@ struct nf_icmp_net {
unsigned int timeout; unsigned int timeout;
}; };
#ifdef CONFIG_NF_CT_PROTO_DCCP
struct nf_dccp_net {
struct nf_proto_net pn;
int dccp_loose;
unsigned int dccp_timeout[CT_DCCP_MAX + 1];
};
#endif
#ifdef CONFIG_NF_CT_PROTO_SCTP
struct nf_sctp_net {
struct nf_proto_net pn;
unsigned int timeouts[SCTP_CONNTRACK_MAX];
};
#endif
#ifdef CONFIG_NF_CT_PROTO_UDPLITE
enum udplite_conntrack {
UDPLITE_CT_UNREPLIED,
UDPLITE_CT_REPLIED,
UDPLITE_CT_MAX
};
struct nf_udplite_net {
struct nf_proto_net pn;
unsigned int timeouts[UDPLITE_CT_MAX];
};
#endif
struct nf_ip_net { struct nf_ip_net {
struct nf_generic_net generic; struct nf_generic_net generic;
struct nf_tcp_net tcp; struct nf_tcp_net tcp;
struct nf_udp_net udp; struct nf_udp_net udp;
struct nf_icmp_net icmp; struct nf_icmp_net icmp;
struct nf_icmp_net icmpv6; struct nf_icmp_net icmpv6;
#ifdef CONFIG_NF_CT_PROTO_DCCP
struct nf_dccp_net dccp;
#endif
#ifdef CONFIG_NF_CT_PROTO_SCTP
struct nf_sctp_net sctp;
#endif
#ifdef CONFIG_NF_CT_PROTO_UDPLITE
struct nf_udplite_net udplite;
#endif
}; };
struct ct_pcpu { struct ct_pcpu {
......
...@@ -17,5 +17,11 @@ struct netns_nf { ...@@ -17,5 +17,11 @@ struct netns_nf {
struct ctl_table_header *nf_log_dir_header; struct ctl_table_header *nf_log_dir_header;
#endif #endif
struct nf_hook_entry __rcu *hooks[NFPROTO_NUMPROTO][NF_MAX_HOOKS]; struct nf_hook_entry __rcu *hooks[NFPROTO_NUMPROTO][NF_MAX_HOOKS];
#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV4)
bool defrag_ipv4;
#endif
#if IS_ENABLED(CONFIG_NF_DEFRAG_IPV6)
bool defrag_ipv6;
#endif
}; };
#endif #endif
...@@ -2,7 +2,10 @@ ...@@ -2,7 +2,10 @@
#define _NF_CONNTRACK_TUPLE_COMMON_H #define _NF_CONNTRACK_TUPLE_COMMON_H
#include <linux/types.h> #include <linux/types.h>
#ifndef __KERNEL__
#include <linux/netfilter.h> #include <linux/netfilter.h>
#endif
#include <linux/netfilter/nf_conntrack_common.h> /* IP_CT_IS_REPLY */
enum ip_conntrack_dir { enum ip_conntrack_dir {
IP_CT_DIR_ORIGINAL, IP_CT_DIR_ORIGINAL,
......
...@@ -4,6 +4,7 @@ ...@@ -4,6 +4,7 @@
#define NFT_TABLE_MAXNAMELEN 32 #define NFT_TABLE_MAXNAMELEN 32
#define NFT_CHAIN_MAXNAMELEN 32 #define NFT_CHAIN_MAXNAMELEN 32
#define NFT_SET_MAXNAMELEN 32 #define NFT_SET_MAXNAMELEN 32
#define NFT_OBJ_MAXNAMELEN 32
#define NFT_USERDATA_MAXLEN 256 #define NFT_USERDATA_MAXLEN 256
/** /**
...@@ -85,6 +86,10 @@ enum nft_verdicts { ...@@ -85,6 +86,10 @@ enum nft_verdicts {
* @NFT_MSG_NEWGEN: announce a new generation, only for events (enum nft_gen_attributes) * @NFT_MSG_NEWGEN: announce a new generation, only for events (enum nft_gen_attributes)
* @NFT_MSG_GETGEN: get the rule-set generation (enum nft_gen_attributes) * @NFT_MSG_GETGEN: get the rule-set generation (enum nft_gen_attributes)
* @NFT_MSG_TRACE: trace event (enum nft_trace_attributes) * @NFT_MSG_TRACE: trace event (enum nft_trace_attributes)
* @NFT_MSG_NEWOBJ: create a stateful object (enum nft_obj_attributes)
* @NFT_MSG_GETOBJ: get a stateful object (enum nft_obj_attributes)
* @NFT_MSG_DELOBJ: delete a stateful object (enum nft_obj_attributes)
* @NFT_MSG_GETOBJ_RESET: get and reset a stateful object (enum nft_obj_attributes)
*/ */
enum nf_tables_msg_types { enum nf_tables_msg_types {
NFT_MSG_NEWTABLE, NFT_MSG_NEWTABLE,
...@@ -105,6 +110,10 @@ enum nf_tables_msg_types { ...@@ -105,6 +110,10 @@ enum nf_tables_msg_types {
NFT_MSG_NEWGEN, NFT_MSG_NEWGEN,
NFT_MSG_GETGEN, NFT_MSG_GETGEN,
NFT_MSG_TRACE, NFT_MSG_TRACE,
NFT_MSG_NEWOBJ,
NFT_MSG_GETOBJ,
NFT_MSG_DELOBJ,
NFT_MSG_GETOBJ_RESET,
NFT_MSG_MAX, NFT_MSG_MAX,
}; };
...@@ -246,6 +255,7 @@ enum nft_rule_compat_attributes { ...@@ -246,6 +255,7 @@ enum nft_rule_compat_attributes {
* @NFT_SET_MAP: set is used as a dictionary * @NFT_SET_MAP: set is used as a dictionary
* @NFT_SET_TIMEOUT: set uses timeouts * @NFT_SET_TIMEOUT: set uses timeouts
* @NFT_SET_EVAL: set contains expressions for evaluation * @NFT_SET_EVAL: set contains expressions for evaluation
* @NFT_SET_OBJECT: set contains stateful objects
*/ */
enum nft_set_flags { enum nft_set_flags {
NFT_SET_ANONYMOUS = 0x1, NFT_SET_ANONYMOUS = 0x1,
...@@ -254,6 +264,7 @@ enum nft_set_flags { ...@@ -254,6 +264,7 @@ enum nft_set_flags {
NFT_SET_MAP = 0x8, NFT_SET_MAP = 0x8,
NFT_SET_TIMEOUT = 0x10, NFT_SET_TIMEOUT = 0x10,
NFT_SET_EVAL = 0x20, NFT_SET_EVAL = 0x20,
NFT_SET_OBJECT = 0x40,
}; };
/** /**
...@@ -295,6 +306,7 @@ enum nft_set_desc_attributes { ...@@ -295,6 +306,7 @@ enum nft_set_desc_attributes {
* @NFTA_SET_TIMEOUT: default timeout value (NLA_U64) * @NFTA_SET_TIMEOUT: default timeout value (NLA_U64)
* @NFTA_SET_GC_INTERVAL: garbage collection interval (NLA_U32) * @NFTA_SET_GC_INTERVAL: garbage collection interval (NLA_U32)
* @NFTA_SET_USERDATA: user data (NLA_BINARY) * @NFTA_SET_USERDATA: user data (NLA_BINARY)
* @NFTA_SET_OBJ_TYPE: stateful object type (NLA_U32: NFT_OBJECT_*)
*/ */
enum nft_set_attributes { enum nft_set_attributes {
NFTA_SET_UNSPEC, NFTA_SET_UNSPEC,
...@@ -312,6 +324,7 @@ enum nft_set_attributes { ...@@ -312,6 +324,7 @@ enum nft_set_attributes {
NFTA_SET_GC_INTERVAL, NFTA_SET_GC_INTERVAL,
NFTA_SET_USERDATA, NFTA_SET_USERDATA,
NFTA_SET_PAD, NFTA_SET_PAD,
NFTA_SET_OBJ_TYPE,
__NFTA_SET_MAX __NFTA_SET_MAX
}; };
#define NFTA_SET_MAX (__NFTA_SET_MAX - 1) #define NFTA_SET_MAX (__NFTA_SET_MAX - 1)
...@@ -335,6 +348,7 @@ enum nft_set_elem_flags { ...@@ -335,6 +348,7 @@ enum nft_set_elem_flags {
* @NFTA_SET_ELEM_EXPIRATION: expiration time (NLA_U64) * @NFTA_SET_ELEM_EXPIRATION: expiration time (NLA_U64)
* @NFTA_SET_ELEM_USERDATA: user data (NLA_BINARY) * @NFTA_SET_ELEM_USERDATA: user data (NLA_BINARY)
* @NFTA_SET_ELEM_EXPR: expression (NLA_NESTED: nft_expr_attributes) * @NFTA_SET_ELEM_EXPR: expression (NLA_NESTED: nft_expr_attributes)
* @NFTA_SET_ELEM_OBJREF: stateful object reference (NLA_STRING)
*/ */
enum nft_set_elem_attributes { enum nft_set_elem_attributes {
NFTA_SET_ELEM_UNSPEC, NFTA_SET_ELEM_UNSPEC,
...@@ -346,6 +360,7 @@ enum nft_set_elem_attributes { ...@@ -346,6 +360,7 @@ enum nft_set_elem_attributes {
NFTA_SET_ELEM_USERDATA, NFTA_SET_ELEM_USERDATA,
NFTA_SET_ELEM_EXPR, NFTA_SET_ELEM_EXPR,
NFTA_SET_ELEM_PAD, NFTA_SET_ELEM_PAD,
NFTA_SET_ELEM_OBJREF,
__NFTA_SET_ELEM_MAX __NFTA_SET_ELEM_MAX
}; };
#define NFTA_SET_ELEM_MAX (__NFTA_SET_ELEM_MAX - 1) #define NFTA_SET_ELEM_MAX (__NFTA_SET_ELEM_MAX - 1)
...@@ -659,6 +674,10 @@ enum nft_payload_csum_types { ...@@ -659,6 +674,10 @@ enum nft_payload_csum_types {
NFT_PAYLOAD_CSUM_INET, NFT_PAYLOAD_CSUM_INET,
}; };
enum nft_payload_csum_flags {
NFT_PAYLOAD_L4CSUM_PSEUDOHDR = (1 << 0),
};
/** /**
* enum nft_payload_attributes - nf_tables payload expression netlink attributes * enum nft_payload_attributes - nf_tables payload expression netlink attributes
* *
...@@ -669,6 +688,7 @@ enum nft_payload_csum_types { ...@@ -669,6 +688,7 @@ enum nft_payload_csum_types {
* @NFTA_PAYLOAD_SREG: source register to load data from (NLA_U32: nft_registers) * @NFTA_PAYLOAD_SREG: source register to load data from (NLA_U32: nft_registers)
* @NFTA_PAYLOAD_CSUM_TYPE: checksum type (NLA_U32) * @NFTA_PAYLOAD_CSUM_TYPE: checksum type (NLA_U32)
* @NFTA_PAYLOAD_CSUM_OFFSET: checksum offset relative to base (NLA_U32) * @NFTA_PAYLOAD_CSUM_OFFSET: checksum offset relative to base (NLA_U32)
* @NFTA_PAYLOAD_CSUM_FLAGS: checksum flags (NLA_U32)
*/ */
enum nft_payload_attributes { enum nft_payload_attributes {
NFTA_PAYLOAD_UNSPEC, NFTA_PAYLOAD_UNSPEC,
...@@ -679,6 +699,7 @@ enum nft_payload_attributes { ...@@ -679,6 +699,7 @@ enum nft_payload_attributes {
NFTA_PAYLOAD_SREG, NFTA_PAYLOAD_SREG,
NFTA_PAYLOAD_CSUM_TYPE, NFTA_PAYLOAD_CSUM_TYPE,
NFTA_PAYLOAD_CSUM_OFFSET, NFTA_PAYLOAD_CSUM_OFFSET,
NFTA_PAYLOAD_CSUM_FLAGS,
__NFTA_PAYLOAD_MAX __NFTA_PAYLOAD_MAX
}; };
#define NFTA_PAYLOAD_MAX (__NFTA_PAYLOAD_MAX - 1) #define NFTA_PAYLOAD_MAX (__NFTA_PAYLOAD_MAX - 1)
...@@ -968,6 +989,7 @@ enum nft_queue_attributes { ...@@ -968,6 +989,7 @@ enum nft_queue_attributes {
enum nft_quota_flags { enum nft_quota_flags {
NFT_QUOTA_F_INV = (1 << 0), NFT_QUOTA_F_INV = (1 << 0),
NFT_QUOTA_F_DEPLETED = (1 << 1),
}; };
/** /**
...@@ -975,12 +997,14 @@ enum nft_quota_flags { ...@@ -975,12 +997,14 @@ enum nft_quota_flags {
* *
* @NFTA_QUOTA_BYTES: quota in bytes (NLA_U16) * @NFTA_QUOTA_BYTES: quota in bytes (NLA_U16)
* @NFTA_QUOTA_FLAGS: flags (NLA_U32) * @NFTA_QUOTA_FLAGS: flags (NLA_U32)
* @NFTA_QUOTA_CONSUMED: quota already consumed in bytes (NLA_U64)
*/ */
enum nft_quota_attributes { enum nft_quota_attributes {
NFTA_QUOTA_UNSPEC, NFTA_QUOTA_UNSPEC,
NFTA_QUOTA_BYTES, NFTA_QUOTA_BYTES,
NFTA_QUOTA_FLAGS, NFTA_QUOTA_FLAGS,
NFTA_QUOTA_PAD, NFTA_QUOTA_PAD,
NFTA_QUOTA_CONSUMED,
__NFTA_QUOTA_MAX __NFTA_QUOTA_MAX
}; };
#define NFTA_QUOTA_MAX (__NFTA_QUOTA_MAX - 1) #define NFTA_QUOTA_MAX (__NFTA_QUOTA_MAX - 1)
...@@ -1124,6 +1148,26 @@ enum nft_fwd_attributes { ...@@ -1124,6 +1148,26 @@ enum nft_fwd_attributes {
}; };
#define NFTA_FWD_MAX (__NFTA_FWD_MAX - 1) #define NFTA_FWD_MAX (__NFTA_FWD_MAX - 1)
/**
* enum nft_objref_attributes - nf_tables stateful object expression netlink attributes
*
* @NFTA_OBJREF_IMM_TYPE: object type for immediate reference (NLA_U32: nft_register)
* @NFTA_OBJREF_IMM_NAME: object name for immediate reference (NLA_STRING)
* @NFTA_OBJREF_SET_SREG: source register of the data to look for (NLA_U32: nft_registers)
* @NFTA_OBJREF_SET_NAME: name of the set where to look for (NLA_STRING)
* @NFTA_OBJREF_SET_ID: id of the set where to look for in this transaction (NLA_U32)
*/
enum nft_objref_attributes {
NFTA_OBJREF_UNSPEC,
NFTA_OBJREF_IMM_TYPE,
NFTA_OBJREF_IMM_NAME,
NFTA_OBJREF_SET_SREG,
NFTA_OBJREF_SET_NAME,
NFTA_OBJREF_SET_ID,
__NFTA_OBJREF_MAX
};
#define NFTA_OBJREF_MAX (__NFTA_OBJREF_MAX - 1)
/** /**
* enum nft_gen_attributes - nf_tables ruleset generation attributes * enum nft_gen_attributes - nf_tables ruleset generation attributes
* *
...@@ -1172,6 +1216,32 @@ enum nft_fib_flags { ...@@ -1172,6 +1216,32 @@ enum nft_fib_flags {
NFTA_FIB_F_OIF = 1 << 4, /* restrict to oif */ NFTA_FIB_F_OIF = 1 << 4, /* restrict to oif */
}; };
#define NFT_OBJECT_UNSPEC 0
#define NFT_OBJECT_COUNTER 1
#define NFT_OBJECT_QUOTA 2
#define __NFT_OBJECT_MAX 3
#define NFT_OBJECT_MAX (__NFT_OBJECT_MAX - 1)
/**
* enum nft_object_attributes - nf_tables stateful object netlink attributes
*
* @NFTA_OBJ_TABLE: name of the table containing the expression (NLA_STRING)
* @NFTA_OBJ_NAME: name of this expression type (NLA_STRING)
* @NFTA_OBJ_TYPE: stateful object type (NLA_U32)
* @NFTA_OBJ_DATA: stateful object data (NLA_NESTED)
* @NFTA_OBJ_USE: number of references to this expression (NLA_U32)
*/
enum nft_object_attributes {
NFTA_OBJ_UNSPEC,
NFTA_OBJ_TABLE,
NFTA_OBJ_NAME,
NFTA_OBJ_TYPE,
NFTA_OBJ_DATA,
NFTA_OBJ_USE,
__NFTA_OBJ_MAX
};
#define NFTA_OBJ_MAX (__NFTA_OBJ_MAX - 1)
/** /**
* enum nft_trace_attributes - nf_tables trace netlink attributes * enum nft_trace_attributes - nf_tables trace netlink attributes
* *
......
...@@ -2,9 +2,11 @@ ...@@ -2,9 +2,11 @@
#define _XT_BPF_H #define _XT_BPF_H
#include <linux/filter.h> #include <linux/filter.h>
#include <linux/limits.h>
#include <linux/types.h> #include <linux/types.h>
#define XT_BPF_MAX_NUM_INSTR 64 #define XT_BPF_MAX_NUM_INSTR 64
#define XT_BPF_PATH_MAX (XT_BPF_MAX_NUM_INSTR * sizeof(struct sock_filter))
struct bpf_prog; struct bpf_prog;
...@@ -16,4 +18,23 @@ struct xt_bpf_info { ...@@ -16,4 +18,23 @@ struct xt_bpf_info {
struct bpf_prog *filter __attribute__((aligned(8))); struct bpf_prog *filter __attribute__((aligned(8)));
}; };
enum xt_bpf_modes {
XT_BPF_MODE_BYTECODE,
XT_BPF_MODE_FD_PINNED,
XT_BPF_MODE_FD_ELF,
};
struct xt_bpf_info_v1 {
__u16 mode;
__u16 bpf_program_num_elem;
__s32 fd;
union {
struct sock_filter bpf_program[XT_BPF_MAX_NUM_INSTR];
char path[XT_BPF_PATH_MAX];
};
/* only used in the kernel */
struct bpf_prog *filter __attribute__((aligned(8)));
};
#endif /*_XT_BPF_H */ #endif /*_XT_BPF_H */
@@ -1008,10 +1008,10 @@ int br_nf_hook_thresh(unsigned int hook, struct net *net,
	struct nf_hook_state state;
	int ret;
-	elem = rcu_dereference(net->nf.hooks[NFPROTO_BRIDGE][hook]);
-
-	while (elem && (elem->ops.priority <= NF_BR_PRI_BRNF))
-		elem = rcu_dereference(elem->next);
	for (elem = rcu_dereference(net->nf.hooks[NFPROTO_BRIDGE][hook]);
	     elem && nf_hook_entry_priority(elem) <= NF_BR_PRI_BRNF;
	     elem = rcu_dereference(elem->next))
		;
	if (!elem)
		return okfn(net, sk, skb);
...
@@ -24,7 +24,8 @@ static void nf_log_bridge_packet(struct net *net, u_int8_t pf,
				 const struct nf_loginfo *loginfo,
				 const char *prefix)
{
-	nf_log_l2packet(net, pf, hooknum, skb, in, out, loginfo, prefix);
	nf_log_l2packet(net, pf, eth_hdr(skb)->h_proto, hooknum, skb,
			in, out, loginfo, prefix);
}
static struct nf_logger nf_bridge_logger __read_mostly = {
...
...@@ -411,17 +411,15 @@ static inline int check_target(struct arpt_entry *e, const char *name) ...@@ -411,17 +411,15 @@ static inline int check_target(struct arpt_entry *e, const char *name)
} }
static inline int static inline int
find_check_entry(struct arpt_entry *e, const char *name, unsigned int size) find_check_entry(struct arpt_entry *e, const char *name, unsigned int size,
struct xt_percpu_counter_alloc_state *alloc_state)
{ {
struct xt_entry_target *t; struct xt_entry_target *t;
struct xt_target *target; struct xt_target *target;
unsigned long pcnt;
int ret; int ret;
pcnt = xt_percpu_counter_alloc(); if (!xt_percpu_counter_alloc(alloc_state, &e->counters))
if (IS_ERR_VALUE(pcnt))
return -ENOMEM; return -ENOMEM;
e->counters.pcnt = pcnt;
t = arpt_get_target(e); t = arpt_get_target(e);
target = xt_request_find_target(NFPROTO_ARP, t->u.user.name, target = xt_request_find_target(NFPROTO_ARP, t->u.user.name,
...@@ -439,7 +437,7 @@ find_check_entry(struct arpt_entry *e, const char *name, unsigned int size) ...@@ -439,7 +437,7 @@ find_check_entry(struct arpt_entry *e, const char *name, unsigned int size)
err: err:
module_put(t->u.kernel.target->me); module_put(t->u.kernel.target->me);
out: out:
xt_percpu_counter_free(e->counters.pcnt); xt_percpu_counter_free(&e->counters);
return ret; return ret;
} }
...@@ -519,7 +517,7 @@ static inline void cleanup_entry(struct arpt_entry *e) ...@@ -519,7 +517,7 @@ static inline void cleanup_entry(struct arpt_entry *e)
if (par.target->destroy != NULL) if (par.target->destroy != NULL)
par.target->destroy(&par); par.target->destroy(&par);
module_put(par.target->me); module_put(par.target->me);
xt_percpu_counter_free(e->counters.pcnt); xt_percpu_counter_free(&e->counters);
} }
/* Checks and translates the user-supplied table segment (held in /* Checks and translates the user-supplied table segment (held in
...@@ -528,6 +526,7 @@ static inline void cleanup_entry(struct arpt_entry *e) ...@@ -528,6 +526,7 @@ static inline void cleanup_entry(struct arpt_entry *e)
static int translate_table(struct xt_table_info *newinfo, void *entry0, static int translate_table(struct xt_table_info *newinfo, void *entry0,
const struct arpt_replace *repl) const struct arpt_replace *repl)
{ {
struct xt_percpu_counter_alloc_state alloc_state = { 0 };
struct arpt_entry *iter; struct arpt_entry *iter;
unsigned int *offsets; unsigned int *offsets;
unsigned int i; unsigned int i;
...@@ -590,7 +589,8 @@ static int translate_table(struct xt_table_info *newinfo, void *entry0, ...@@ -590,7 +589,8 @@ static int translate_table(struct xt_table_info *newinfo, void *entry0,
/* Finally, each sanity check must pass */ /* Finally, each sanity check must pass */
i = 0; i = 0;
xt_entry_foreach(iter, entry0, newinfo->size) { xt_entry_foreach(iter, entry0, newinfo->size) {
ret = find_check_entry(iter, repl->name, repl->size); ret = find_check_entry(iter, repl->name, repl->size,
&alloc_state);
if (ret != 0) if (ret != 0)
break; break;
++i; ++i;
......
...@@ -531,7 +531,8 @@ static int check_target(struct ipt_entry *e, struct net *net, const char *name) ...@@ -531,7 +531,8 @@ static int check_target(struct ipt_entry *e, struct net *net, const char *name)
static int static int
find_check_entry(struct ipt_entry *e, struct net *net, const char *name, find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
unsigned int size) unsigned int size,
struct xt_percpu_counter_alloc_state *alloc_state)
{ {
struct xt_entry_target *t; struct xt_entry_target *t;
struct xt_target *target; struct xt_target *target;
...@@ -539,12 +540,9 @@ find_check_entry(struct ipt_entry *e, struct net *net, const char *name, ...@@ -539,12 +540,9 @@ find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
unsigned int j; unsigned int j;
struct xt_mtchk_param mtpar; struct xt_mtchk_param mtpar;
struct xt_entry_match *ematch; struct xt_entry_match *ematch;
unsigned long pcnt;
pcnt = xt_percpu_counter_alloc(); if (!xt_percpu_counter_alloc(alloc_state, &e->counters))
if (IS_ERR_VALUE(pcnt))
return -ENOMEM; return -ENOMEM;
e->counters.pcnt = pcnt;
j = 0; j = 0;
mtpar.net = net; mtpar.net = net;
...@@ -582,7 +580,7 @@ find_check_entry(struct ipt_entry *e, struct net *net, const char *name, ...@@ -582,7 +580,7 @@ find_check_entry(struct ipt_entry *e, struct net *net, const char *name,
cleanup_match(ematch, net); cleanup_match(ematch, net);
} }
xt_percpu_counter_free(e->counters.pcnt); xt_percpu_counter_free(&e->counters);
return ret; return ret;
} }
...@@ -670,7 +668,7 @@ cleanup_entry(struct ipt_entry *e, struct net *net) ...@@ -670,7 +668,7 @@ cleanup_entry(struct ipt_entry *e, struct net *net)
if (par.target->destroy != NULL) if (par.target->destroy != NULL)
par.target->destroy(&par); par.target->destroy(&par);
module_put(par.target->me); module_put(par.target->me);
xt_percpu_counter_free(e->counters.pcnt); xt_percpu_counter_free(&e->counters);
} }
/* Checks and translates the user-supplied table segment (held in /* Checks and translates the user-supplied table segment (held in
...@@ -679,6 +677,7 @@ static int ...@@ -679,6 +677,7 @@ static int
translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0, translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
const struct ipt_replace *repl) const struct ipt_replace *repl)
{ {
struct xt_percpu_counter_alloc_state alloc_state = { 0 };
struct ipt_entry *iter; struct ipt_entry *iter;
unsigned int *offsets; unsigned int *offsets;
unsigned int i; unsigned int i;
...@@ -738,7 +737,8 @@ translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0, ...@@ -738,7 +737,8 @@ translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
/* Finally, each sanity check must pass */ /* Finally, each sanity check must pass */
i = 0; i = 0;
xt_entry_foreach(iter, entry0, newinfo->size) { xt_entry_foreach(iter, entry0, newinfo->size) {
ret = find_check_entry(iter, net, repl->name, repl->size); ret = find_check_entry(iter, net, repl->name, repl->size,
&alloc_state);
if (ret != 0) if (ret != 0)
break; break;
++i; ++i;
......
...@@ -419,7 +419,7 @@ static int clusterip_tg_check(const struct xt_tgchk_param *par) ...@@ -419,7 +419,7 @@ static int clusterip_tg_check(const struct xt_tgchk_param *par)
} }
cipinfo->config = config; cipinfo->config = config;
ret = nf_ct_l3proto_try_module_get(par->family); ret = nf_ct_netns_get(par->net, par->family);
if (ret < 0) if (ret < 0)
pr_info("cannot load conntrack support for proto=%u\n", pr_info("cannot load conntrack support for proto=%u\n",
par->family); par->family);
...@@ -444,7 +444,7 @@ static void clusterip_tg_destroy(const struct xt_tgdtor_param *par) ...@@ -444,7 +444,7 @@ static void clusterip_tg_destroy(const struct xt_tgdtor_param *par)
clusterip_config_put(cipinfo->config); clusterip_config_put(cipinfo->config);
nf_ct_l3proto_module_put(par->family); nf_ct_netns_get(par->net, par->family);
} }
#ifdef CONFIG_COMPAT #ifdef CONFIG_COMPAT
......
@@ -41,7 +41,7 @@ static int masquerade_tg_check(const struct xt_tgchk_param *par)
		pr_debug("bad rangesize %u\n", mr->rangesize);
		return -EINVAL;
	}
-	return 0;
	return nf_ct_netns_get(par->net, par->family);
}
static unsigned int
@@ -59,6 +59,11 @@ masquerade_tg(struct sk_buff *skb, const struct xt_action_param *par)
				  xt_out(par));
}
static void masquerade_tg_destroy(const struct xt_tgdtor_param *par)
{
	nf_ct_netns_put(par->net, par->family);
}
static struct xt_target masquerade_tg_reg __read_mostly = {
	.name		= "MASQUERADE",
	.family		= NFPROTO_IPV4,
@@ -67,6 +72,7 @@ static struct xt_target masquerade_tg_reg __read_mostly = {
	.table		= "nat",
	.hooks		= 1 << NF_INET_POST_ROUTING,
	.checkentry	= masquerade_tg_check,
	.destroy	= masquerade_tg_destroy,
	.me		= THIS_MODULE,
};
...
@@ -418,12 +418,12 @@ static int synproxy_tg4_check(const struct xt_tgchk_param *par)
	    e->ip.invflags & XT_INV_PROTO)
		return -EINVAL;
-	return nf_ct_l3proto_try_module_get(par->family);
	return nf_ct_netns_get(par->net, par->family);
}
static void synproxy_tg4_destroy(const struct xt_tgdtor_param *par)
{
-	nf_ct_l3proto_module_put(par->family);
	nf_ct_netns_put(par->net, par->family);
}
static struct xt_target synproxy_tg4_reg __read_mostly = {
...
@@ -83,10 +83,12 @@ static bool rpfilter_mt(const struct sk_buff *skb, struct xt_action_param *par)
		return true ^ invert;
	iph = ip_hdr(skb);
-	if (ipv4_is_multicast(iph->daddr)) {
-		if (ipv4_is_zeronet(iph->saddr))
-			return ipv4_is_local_multicast(iph->daddr) ^ invert;
	if (ipv4_is_zeronet(iph->saddr)) {
		if (ipv4_is_lbcast(iph->daddr) ||
		    ipv4_is_local_multicast(iph->daddr))
			return true ^ invert;
	}
	flow.flowi4_iif = LOOPBACK_IFINDEX;
	flow.daddr = iph->saddr;
	flow.saddr = rpfilter_get_saddr(iph->daddr);
...
...@@ -31,6 +31,13 @@ ...@@ -31,6 +31,13 @@
#include <net/netfilter/ipv4/nf_defrag_ipv4.h> #include <net/netfilter/ipv4/nf_defrag_ipv4.h>
#include <net/netfilter/nf_log.h> #include <net/netfilter/nf_log.h>
static int conntrack4_net_id __read_mostly;
static DEFINE_MUTEX(register_ipv4_hooks);
struct conntrack4_net {
unsigned int users;
};
static bool ipv4_pkt_to_tuple(const struct sk_buff *skb, unsigned int nhoff, static bool ipv4_pkt_to_tuple(const struct sk_buff *skb, unsigned int nhoff,
struct nf_conntrack_tuple *tuple) struct nf_conntrack_tuple *tuple)
{ {
...@@ -307,9 +314,42 @@ static struct nf_sockopt_ops so_getorigdst = { ...@@ -307,9 +314,42 @@ static struct nf_sockopt_ops so_getorigdst = {
.owner = THIS_MODULE, .owner = THIS_MODULE,
}; };
static int ipv4_init_net(struct net *net) static int ipv4_hooks_register(struct net *net)
{ {
return 0; struct conntrack4_net *cnet = net_generic(net, conntrack4_net_id);
int err = 0;
mutex_lock(&register_ipv4_hooks);
cnet->users++;
if (cnet->users > 1)
goto out_unlock;
err = nf_defrag_ipv4_enable(net);
if (err) {
cnet->users = 0;
goto out_unlock;
}
err = nf_register_net_hooks(net, ipv4_conntrack_ops,
ARRAY_SIZE(ipv4_conntrack_ops));
if (err)
cnet->users = 0;
out_unlock:
mutex_unlock(&register_ipv4_hooks);
return err;
}
static void ipv4_hooks_unregister(struct net *net)
{
struct conntrack4_net *cnet = net_generic(net, conntrack4_net_id);
mutex_lock(&register_ipv4_hooks);
if (cnet->users && (--cnet->users == 0))
nf_unregister_net_hooks(net, ipv4_conntrack_ops,
ARRAY_SIZE(ipv4_conntrack_ops));
mutex_unlock(&register_ipv4_hooks);
} }
struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4 __read_mostly = { struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4 __read_mostly = {
...@@ -325,7 +365,8 @@ struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4 __read_mostly = { ...@@ -325,7 +365,8 @@ struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv4 __read_mostly = {
.nlattr_to_tuple = ipv4_nlattr_to_tuple, .nlattr_to_tuple = ipv4_nlattr_to_tuple,
.nla_policy = ipv4_nla_policy, .nla_policy = ipv4_nla_policy,
#endif #endif
.init_net = ipv4_init_net, .net_ns_get = ipv4_hooks_register,
.net_ns_put = ipv4_hooks_unregister,
.me = THIS_MODULE, .me = THIS_MODULE,
}; };
...@@ -340,6 +381,15 @@ static struct nf_conntrack_l4proto *builtin_l4proto4[] = { ...@@ -340,6 +381,15 @@ static struct nf_conntrack_l4proto *builtin_l4proto4[] = {
&nf_conntrack_l4proto_tcp4, &nf_conntrack_l4proto_tcp4,
&nf_conntrack_l4proto_udp4, &nf_conntrack_l4proto_udp4,
&nf_conntrack_l4proto_icmp, &nf_conntrack_l4proto_icmp,
#ifdef CONFIG_NF_CT_PROTO_DCCP
&nf_conntrack_l4proto_dccp4,
#endif
#ifdef CONFIG_NF_CT_PROTO_SCTP
&nf_conntrack_l4proto_sctp4,
#endif
#ifdef CONFIG_NF_CT_PROTO_UDPLITE
&nf_conntrack_l4proto_udplite4,
#endif
}; };
static int ipv4_net_init(struct net *net) static int ipv4_net_init(struct net *net)
...@@ -369,6 +419,8 @@ static void ipv4_net_exit(struct net *net) ...@@ -369,6 +419,8 @@ static void ipv4_net_exit(struct net *net)
static struct pernet_operations ipv4_net_ops = { static struct pernet_operations ipv4_net_ops = {
.init = ipv4_net_init, .init = ipv4_net_init,
.exit = ipv4_net_exit, .exit = ipv4_net_exit,
.id = &conntrack4_net_id,
.size = sizeof(struct conntrack4_net),
}; };
static int __init nf_conntrack_l3proto_ipv4_init(void) static int __init nf_conntrack_l3proto_ipv4_init(void)
...@@ -376,7 +428,6 @@ static int __init nf_conntrack_l3proto_ipv4_init(void) ...@@ -376,7 +428,6 @@ static int __init nf_conntrack_l3proto_ipv4_init(void)
int ret = 0; int ret = 0;
need_conntrack(); need_conntrack();
nf_defrag_ipv4_enable();
ret = nf_register_sockopt(&so_getorigdst); ret = nf_register_sockopt(&so_getorigdst);
if (ret < 0) { if (ret < 0) {
...@@ -390,17 +441,10 @@ static int __init nf_conntrack_l3proto_ipv4_init(void) ...@@ -390,17 +441,10 @@ static int __init nf_conntrack_l3proto_ipv4_init(void)
goto cleanup_sockopt; goto cleanup_sockopt;
} }
ret = nf_register_hooks(ipv4_conntrack_ops,
ARRAY_SIZE(ipv4_conntrack_ops));
if (ret < 0) {
pr_err("nf_conntrack_ipv4: can't register hooks.\n");
goto cleanup_pernet;
}
ret = nf_ct_l4proto_register(builtin_l4proto4, ret = nf_ct_l4proto_register(builtin_l4proto4,
ARRAY_SIZE(builtin_l4proto4)); ARRAY_SIZE(builtin_l4proto4));
if (ret < 0) if (ret < 0)
goto cleanup_hooks; goto cleanup_pernet;
ret = nf_ct_l3proto_register(&nf_conntrack_l3proto_ipv4); ret = nf_ct_l3proto_register(&nf_conntrack_l3proto_ipv4);
if (ret < 0) { if (ret < 0) {
...@@ -412,8 +456,6 @@ static int __init nf_conntrack_l3proto_ipv4_init(void) ...@@ -412,8 +456,6 @@ static int __init nf_conntrack_l3proto_ipv4_init(void)
cleanup_l4proto: cleanup_l4proto:
nf_ct_l4proto_unregister(builtin_l4proto4, nf_ct_l4proto_unregister(builtin_l4proto4,
ARRAY_SIZE(builtin_l4proto4)); ARRAY_SIZE(builtin_l4proto4));
cleanup_hooks:
nf_unregister_hooks(ipv4_conntrack_ops, ARRAY_SIZE(ipv4_conntrack_ops));
cleanup_pernet: cleanup_pernet:
unregister_pernet_subsys(&ipv4_net_ops); unregister_pernet_subsys(&ipv4_net_ops);
cleanup_sockopt: cleanup_sockopt:
...@@ -427,7 +469,6 @@ static void __exit nf_conntrack_l3proto_ipv4_fini(void) ...@@ -427,7 +469,6 @@ static void __exit nf_conntrack_l3proto_ipv4_fini(void)
nf_ct_l3proto_unregister(&nf_conntrack_l3proto_ipv4); nf_ct_l3proto_unregister(&nf_conntrack_l3proto_ipv4);
nf_ct_l4proto_unregister(builtin_l4proto4, nf_ct_l4proto_unregister(builtin_l4proto4,
ARRAY_SIZE(builtin_l4proto4)); ARRAY_SIZE(builtin_l4proto4));
nf_unregister_hooks(ipv4_conntrack_ops, ARRAY_SIZE(ipv4_conntrack_ops));
unregister_pernet_subsys(&ipv4_net_ops); unregister_pernet_subsys(&ipv4_net_ops);
nf_unregister_sockopt(&so_getorigdst); nf_unregister_sockopt(&so_getorigdst);
} }
......
...@@ -11,6 +11,7 @@ ...@@ -11,6 +11,7 @@
#include <linux/netfilter.h> #include <linux/netfilter.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/skbuff.h> #include <linux/skbuff.h>
#include <net/netns/generic.h>
#include <net/route.h> #include <net/route.h>
#include <net/ip.h> #include <net/ip.h>
...@@ -22,6 +23,8 @@ ...@@ -22,6 +23,8 @@
#endif #endif
#include <net/netfilter/nf_conntrack_zones.h> #include <net/netfilter/nf_conntrack_zones.h>
static DEFINE_MUTEX(defrag4_mutex);
static int nf_ct_ipv4_gather_frags(struct net *net, struct sk_buff *skb, static int nf_ct_ipv4_gather_frags(struct net *net, struct sk_buff *skb,
u_int32_t user) u_int32_t user)
{ {
...@@ -102,18 +105,50 @@ static struct nf_hook_ops ipv4_defrag_ops[] = { ...@@ -102,18 +105,50 @@ static struct nf_hook_ops ipv4_defrag_ops[] = {
}, },
}; };
static void __net_exit defrag4_net_exit(struct net *net)
{
if (net->nf.defrag_ipv4) {
nf_unregister_net_hooks(net, ipv4_defrag_ops,
ARRAY_SIZE(ipv4_defrag_ops));
net->nf.defrag_ipv4 = false;
}
}
static struct pernet_operations defrag4_net_ops = {
.exit = defrag4_net_exit,
};
static int __init nf_defrag_init(void) static int __init nf_defrag_init(void)
{ {
return nf_register_hooks(ipv4_defrag_ops, ARRAY_SIZE(ipv4_defrag_ops)); return register_pernet_subsys(&defrag4_net_ops);
} }
static void __exit nf_defrag_fini(void) static void __exit nf_defrag_fini(void)
{ {
nf_unregister_hooks(ipv4_defrag_ops, ARRAY_SIZE(ipv4_defrag_ops)); unregister_pernet_subsys(&defrag4_net_ops);
} }
void nf_defrag_ipv4_enable(void) int nf_defrag_ipv4_enable(struct net *net)
{ {
int err = 0;
might_sleep();
if (net->nf.defrag_ipv4)
return 0;
mutex_lock(&defrag4_mutex);
if (net->nf.defrag_ipv4)
goto out_unlock;
err = nf_register_net_hooks(net, ipv4_defrag_ops,
ARRAY_SIZE(ipv4_defrag_ops));
if (err == 0)
net->nf.defrag_ipv4 = true;
out_unlock:
mutex_unlock(&defrag4_mutex);
return err;
} }
EXPORT_SYMBOL_GPL(nf_defrag_ipv4_enable); EXPORT_SYMBOL_GPL(nf_defrag_ipv4_enable);
......
...@@ -101,12 +101,13 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs, ...@@ -101,12 +101,13 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
} }
iph = ip_hdr(pkt->skb); iph = ip_hdr(pkt->skb);
if (ipv4_is_multicast(iph->daddr) && if (ipv4_is_zeronet(iph->saddr)) {
ipv4_is_zeronet(iph->saddr) && if (ipv4_is_lbcast(iph->daddr) ||
ipv4_is_local_multicast(iph->daddr)) { ipv4_is_local_multicast(iph->daddr)) {
nft_fib_store_result(dest, priv->result, pkt, nft_fib_store_result(dest, priv->result, pkt,
get_ifindex(pkt->skb->dev)); get_ifindex(pkt->skb->dev));
return; return;
}
} }
if (priv->flags & NFTA_FIB_F_MARK) if (priv->flags & NFTA_FIB_F_MARK)
...@@ -122,6 +123,8 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs, ...@@ -122,6 +123,8 @@ void nft_fib4_eval(const struct nft_expr *expr, struct nft_regs *regs,
fl4.saddr = get_saddr(iph->daddr); fl4.saddr = get_saddr(iph->daddr);
} }
*dest = 0;
if (fib_lookup(nft_net(pkt), &fl4, &res, FIB_LOOKUP_IGNORE_LINKSTATE)) if (fib_lookup(nft_net(pkt), &fl4, &res, FIB_LOOKUP_IGNORE_LINKSTATE))
return; return;
...@@ -198,7 +201,7 @@ nft_fib4_select_ops(const struct nft_ctx *ctx, ...@@ -198,7 +201,7 @@ nft_fib4_select_ops(const struct nft_ctx *ctx,
if (!tb[NFTA_FIB_RESULT]) if (!tb[NFTA_FIB_RESULT])
return ERR_PTR(-EINVAL); return ERR_PTR(-EINVAL);
result = htonl(nla_get_be32(tb[NFTA_FIB_RESULT])); result = ntohl(nla_get_be32(tb[NFTA_FIB_RESULT]));
switch (result) { switch (result) {
case NFT_FIB_RESULT_OIF: case NFT_FIB_RESULT_OIF:
......
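On the NFTA_FIB_RESULT change above: nla_get_be32() returns a network-order value and the switch compares against host-order enum constants, so the conversion has to be ntohl(). Because htonl() and ntohl() apply the same byte swap on any given host, the old code computed an identical value; the fix is about converting in the right direction and keeping the __be32/u32 annotations honest for sparse. A two-line sketch of the intended types:

	__be32 wire   = nla_get_be32(tb[NFTA_FIB_RESULT]);	/* network byte order */
	u32    result = ntohl(wire);				/* host order for the switch () */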
/* /*
* Copyright (c) 2014 Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com> * Copyright (c) 2014 Arturo Borrero Gonzalez <arturo@debian.org>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as * it under the terms of the GNU General Public License version 2 as
...@@ -35,12 +35,19 @@ static void nft_masq_ipv4_eval(const struct nft_expr *expr, ...@@ -35,12 +35,19 @@ static void nft_masq_ipv4_eval(const struct nft_expr *expr,
&range, nft_out(pkt)); &range, nft_out(pkt));
} }
static void
nft_masq_ipv4_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
{
nf_ct_netns_put(ctx->net, NFPROTO_IPV4);
}
static struct nft_expr_type nft_masq_ipv4_type; static struct nft_expr_type nft_masq_ipv4_type;
static const struct nft_expr_ops nft_masq_ipv4_ops = { static const struct nft_expr_ops nft_masq_ipv4_ops = {
.type = &nft_masq_ipv4_type, .type = &nft_masq_ipv4_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_masq)), .size = NFT_EXPR_SIZE(sizeof(struct nft_masq)),
.eval = nft_masq_ipv4_eval, .eval = nft_masq_ipv4_eval,
.init = nft_masq_init, .init = nft_masq_init,
.destroy = nft_masq_ipv4_destroy,
.dump = nft_masq_dump, .dump = nft_masq_dump,
.validate = nft_masq_validate, .validate = nft_masq_validate,
}; };
...@@ -77,5 +84,5 @@ module_init(nft_masq_ipv4_module_init); ...@@ -77,5 +84,5 @@ module_init(nft_masq_ipv4_module_init);
module_exit(nft_masq_ipv4_module_exit); module_exit(nft_masq_ipv4_module_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com>"); MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo@debian.org");
MODULE_ALIAS_NFT_AF_EXPR(AF_INET, "masq"); MODULE_ALIAS_NFT_AF_EXPR(AF_INET, "masq");
/* /*
* Copyright (c) 2014 Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com> * Copyright (c) 2014 Arturo Borrero Gonzalez <arturo@debian.org>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as * it under the terms of the GNU General Public License version 2 as
...@@ -38,12 +38,19 @@ static void nft_redir_ipv4_eval(const struct nft_expr *expr, ...@@ -38,12 +38,19 @@ static void nft_redir_ipv4_eval(const struct nft_expr *expr,
regs->verdict.code = nf_nat_redirect_ipv4(pkt->skb, &mr, nft_hook(pkt)); regs->verdict.code = nf_nat_redirect_ipv4(pkt->skb, &mr, nft_hook(pkt));
} }
static void
nft_redir_ipv4_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
{
nf_ct_netns_put(ctx->net, NFPROTO_IPV4);
}
static struct nft_expr_type nft_redir_ipv4_type; static struct nft_expr_type nft_redir_ipv4_type;
static const struct nft_expr_ops nft_redir_ipv4_ops = { static const struct nft_expr_ops nft_redir_ipv4_ops = {
.type = &nft_redir_ipv4_type, .type = &nft_redir_ipv4_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_redir)), .size = NFT_EXPR_SIZE(sizeof(struct nft_redir)),
.eval = nft_redir_ipv4_eval, .eval = nft_redir_ipv4_eval,
.init = nft_redir_init, .init = nft_redir_init,
.destroy = nft_redir_ipv4_destroy,
.dump = nft_redir_dump, .dump = nft_redir_dump,
.validate = nft_redir_validate, .validate = nft_redir_validate,
}; };
...@@ -71,5 +78,5 @@ module_init(nft_redir_ipv4_module_init); ...@@ -71,5 +78,5 @@ module_init(nft_redir_ipv4_module_init);
module_exit(nft_redir_ipv4_module_exit); module_exit(nft_redir_ipv4_module_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com>"); MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo@debian.org>");
MODULE_ALIAS_NFT_AF_EXPR(AF_INET, "redir"); MODULE_ALIAS_NFT_AF_EXPR(AF_INET, "redir");
...@@ -562,7 +562,8 @@ static int check_target(struct ip6t_entry *e, struct net *net, const char *name) ...@@ -562,7 +562,8 @@ static int check_target(struct ip6t_entry *e, struct net *net, const char *name)
static int static int
find_check_entry(struct ip6t_entry *e, struct net *net, const char *name, find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
unsigned int size) unsigned int size,
struct xt_percpu_counter_alloc_state *alloc_state)
{ {
struct xt_entry_target *t; struct xt_entry_target *t;
struct xt_target *target; struct xt_target *target;
...@@ -570,12 +571,9 @@ find_check_entry(struct ip6t_entry *e, struct net *net, const char *name, ...@@ -570,12 +571,9 @@ find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
unsigned int j; unsigned int j;
struct xt_mtchk_param mtpar; struct xt_mtchk_param mtpar;
struct xt_entry_match *ematch; struct xt_entry_match *ematch;
unsigned long pcnt;
pcnt = xt_percpu_counter_alloc(); if (!xt_percpu_counter_alloc(alloc_state, &e->counters))
if (IS_ERR_VALUE(pcnt))
return -ENOMEM; return -ENOMEM;
e->counters.pcnt = pcnt;
j = 0; j = 0;
mtpar.net = net; mtpar.net = net;
...@@ -612,7 +610,7 @@ find_check_entry(struct ip6t_entry *e, struct net *net, const char *name, ...@@ -612,7 +610,7 @@ find_check_entry(struct ip6t_entry *e, struct net *net, const char *name,
cleanup_match(ematch, net); cleanup_match(ematch, net);
} }
xt_percpu_counter_free(e->counters.pcnt); xt_percpu_counter_free(&e->counters);
return ret; return ret;
} }
...@@ -699,8 +697,7 @@ static void cleanup_entry(struct ip6t_entry *e, struct net *net) ...@@ -699,8 +697,7 @@ static void cleanup_entry(struct ip6t_entry *e, struct net *net)
if (par.target->destroy != NULL) if (par.target->destroy != NULL)
par.target->destroy(&par); par.target->destroy(&par);
module_put(par.target->me); module_put(par.target->me);
xt_percpu_counter_free(&e->counters);
xt_percpu_counter_free(e->counters.pcnt);
} }
/* Checks and translates the user-supplied table segment (held in /* Checks and translates the user-supplied table segment (held in
...@@ -709,6 +706,7 @@ static int ...@@ -709,6 +706,7 @@ static int
translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0, translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
const struct ip6t_replace *repl) const struct ip6t_replace *repl)
{ {
struct xt_percpu_counter_alloc_state alloc_state = { 0 };
struct ip6t_entry *iter; struct ip6t_entry *iter;
unsigned int *offsets; unsigned int *offsets;
unsigned int i; unsigned int i;
...@@ -768,7 +766,8 @@ translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0, ...@@ -768,7 +766,8 @@ translate_table(struct net *net, struct xt_table_info *newinfo, void *entry0,
/* Finally, each sanity check must pass */ /* Finally, each sanity check must pass */
i = 0; i = 0;
xt_entry_foreach(iter, entry0, newinfo->size) { xt_entry_foreach(iter, entry0, newinfo->size) {
ret = find_check_entry(iter, net, repl->name, repl->size); ret = find_check_entry(iter, net, repl->name, repl->size,
&alloc_state);
if (ret != 0) if (ret != 0)
break; break;
++i; ++i;
......
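The alloc_state threaded through find_check_entry() lets consecutive rules carve their per-cpu counters out of one shared percpu chunk instead of issuing one allocation per rule. A rough sketch of that chunked allocator, for illustration only: the real helper lives in x_tables, the struct's field layout here is a plausible reconstruction from how the diff uses it, and details such as the single-CPU shortcut are omitted.

	#define EXAMPLE_PCPU_BLOCK_SIZE 4096

	struct xt_percpu_counter_alloc_state {
		unsigned int off;		/* next free offset inside mem */
		const char __percpu *mem;	/* current percpu chunk, NULL if exhausted */
	};

	static bool example_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
						 struct xt_counters *counter)
	{
		if (!state->mem) {
			state->mem = __alloc_percpu(EXAMPLE_PCPU_BLOCK_SIZE,
						    EXAMPLE_PCPU_BLOCK_SIZE);
			if (!state->mem)
				return false;
			state->off = 0;
		}
		/* hand out one counter-sized slice of the current chunk */
		counter->pcnt = (__force unsigned long)(state->mem + state->off);
		state->off += sizeof(*counter);
		if (state->off > EXAMPLE_PCPU_BLOCK_SIZE - sizeof(*counter))
			state->mem = NULL;	/* force a fresh chunk next time */
		return true;
	}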
...@@ -440,12 +440,12 @@ static int synproxy_tg6_check(const struct xt_tgchk_param *par) ...@@ -440,12 +440,12 @@ static int synproxy_tg6_check(const struct xt_tgchk_param *par)
e->ipv6.invflags & XT_INV_PROTO) e->ipv6.invflags & XT_INV_PROTO)
return -EINVAL; return -EINVAL;
return nf_ct_l3proto_try_module_get(par->family); return nf_ct_netns_get(par->net, par->family);
} }
static void synproxy_tg6_destroy(const struct xt_tgdtor_param *par) static void synproxy_tg6_destroy(const struct xt_tgdtor_param *par)
{ {
nf_ct_l3proto_module_put(par->family); nf_ct_netns_put(par->net, par->family);
} }
static struct xt_target synproxy_tg6_reg __read_mostly = { static struct xt_target synproxy_tg6_reg __read_mostly = {
......
...@@ -34,6 +34,13 @@ ...@@ -34,6 +34,13 @@
#include <net/netfilter/ipv6/nf_defrag_ipv6.h> #include <net/netfilter/ipv6/nf_defrag_ipv6.h>
#include <net/netfilter/nf_log.h> #include <net/netfilter/nf_log.h>
static int conntrack6_net_id;
static DEFINE_MUTEX(register_ipv6_hooks);
struct conntrack6_net {
unsigned int users;
};
static bool ipv6_pkt_to_tuple(const struct sk_buff *skb, unsigned int nhoff, static bool ipv6_pkt_to_tuple(const struct sk_buff *skb, unsigned int nhoff,
struct nf_conntrack_tuple *tuple) struct nf_conntrack_tuple *tuple)
{ {
...@@ -308,6 +315,42 @@ static int ipv6_nlattr_tuple_size(void) ...@@ -308,6 +315,42 @@ static int ipv6_nlattr_tuple_size(void)
} }
#endif #endif
static int ipv6_hooks_register(struct net *net)
{
struct conntrack6_net *cnet = net_generic(net, conntrack6_net_id);
int err = 0;
mutex_lock(&register_ipv6_hooks);
cnet->users++;
if (cnet->users > 1)
goto out_unlock;
err = nf_defrag_ipv6_enable(net);
if (err < 0) {
cnet->users = 0;
goto out_unlock;
}
err = nf_register_net_hooks(net, ipv6_conntrack_ops,
ARRAY_SIZE(ipv6_conntrack_ops));
if (err)
cnet->users = 0;
out_unlock:
mutex_unlock(&register_ipv6_hooks);
return err;
}
static void ipv6_hooks_unregister(struct net *net)
{
struct conntrack6_net *cnet = net_generic(net, conntrack6_net_id);
mutex_lock(&register_ipv6_hooks);
if (cnet->users && (--cnet->users == 0))
nf_unregister_net_hooks(net, ipv6_conntrack_ops,
ARRAY_SIZE(ipv6_conntrack_ops));
mutex_unlock(&register_ipv6_hooks);
}
struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6 __read_mostly = { struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6 __read_mostly = {
.l3proto = PF_INET6, .l3proto = PF_INET6,
.name = "ipv6", .name = "ipv6",
...@@ -321,6 +364,8 @@ struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6 __read_mostly = { ...@@ -321,6 +364,8 @@ struct nf_conntrack_l3proto nf_conntrack_l3proto_ipv6 __read_mostly = {
.nlattr_to_tuple = ipv6_nlattr_to_tuple, .nlattr_to_tuple = ipv6_nlattr_to_tuple,
.nla_policy = ipv6_nla_policy, .nla_policy = ipv6_nla_policy,
#endif #endif
.net_ns_get = ipv6_hooks_register,
.net_ns_put = ipv6_hooks_unregister,
.me = THIS_MODULE, .me = THIS_MODULE,
}; };
...@@ -340,6 +385,15 @@ static struct nf_conntrack_l4proto *builtin_l4proto6[] = { ...@@ -340,6 +385,15 @@ static struct nf_conntrack_l4proto *builtin_l4proto6[] = {
&nf_conntrack_l4proto_tcp6, &nf_conntrack_l4proto_tcp6,
&nf_conntrack_l4proto_udp6, &nf_conntrack_l4proto_udp6,
&nf_conntrack_l4proto_icmpv6, &nf_conntrack_l4proto_icmpv6,
#ifdef CONFIG_NF_CT_PROTO_DCCP
&nf_conntrack_l4proto_dccp6,
#endif
#ifdef CONFIG_NF_CT_PROTO_SCTP
&nf_conntrack_l4proto_sctp6,
#endif
#ifdef CONFIG_NF_CT_PROTO_UDPLITE
&nf_conntrack_l4proto_udplite6,
#endif
}; };
static int ipv6_net_init(struct net *net) static int ipv6_net_init(struct net *net)
...@@ -370,6 +424,8 @@ static void ipv6_net_exit(struct net *net) ...@@ -370,6 +424,8 @@ static void ipv6_net_exit(struct net *net)
static struct pernet_operations ipv6_net_ops = { static struct pernet_operations ipv6_net_ops = {
.init = ipv6_net_init, .init = ipv6_net_init,
.exit = ipv6_net_exit, .exit = ipv6_net_exit,
.id = &conntrack6_net_id,
.size = sizeof(struct conntrack6_net),
}; };
static int __init nf_conntrack_l3proto_ipv6_init(void) static int __init nf_conntrack_l3proto_ipv6_init(void)
...@@ -377,7 +433,6 @@ static int __init nf_conntrack_l3proto_ipv6_init(void) ...@@ -377,7 +433,6 @@ static int __init nf_conntrack_l3proto_ipv6_init(void)
int ret = 0; int ret = 0;
need_conntrack(); need_conntrack();
nf_defrag_ipv6_enable();
ret = nf_register_sockopt(&so_getorigdst6); ret = nf_register_sockopt(&so_getorigdst6);
if (ret < 0) { if (ret < 0) {
...@@ -389,18 +444,10 @@ static int __init nf_conntrack_l3proto_ipv6_init(void) ...@@ -389,18 +444,10 @@ static int __init nf_conntrack_l3proto_ipv6_init(void)
if (ret < 0) if (ret < 0)
goto cleanup_sockopt; goto cleanup_sockopt;
ret = nf_register_hooks(ipv6_conntrack_ops,
ARRAY_SIZE(ipv6_conntrack_ops));
if (ret < 0) {
pr_err("nf_conntrack_ipv6: can't register pre-routing defrag "
"hook.\n");
goto cleanup_pernet;
}
ret = nf_ct_l4proto_register(builtin_l4proto6, ret = nf_ct_l4proto_register(builtin_l4proto6,
ARRAY_SIZE(builtin_l4proto6)); ARRAY_SIZE(builtin_l4proto6));
if (ret < 0) if (ret < 0)
goto cleanup_hooks; goto cleanup_pernet;
ret = nf_ct_l3proto_register(&nf_conntrack_l3proto_ipv6); ret = nf_ct_l3proto_register(&nf_conntrack_l3proto_ipv6);
if (ret < 0) { if (ret < 0) {
...@@ -411,8 +458,6 @@ static int __init nf_conntrack_l3proto_ipv6_init(void) ...@@ -411,8 +458,6 @@ static int __init nf_conntrack_l3proto_ipv6_init(void)
cleanup_l4proto: cleanup_l4proto:
nf_ct_l4proto_unregister(builtin_l4proto6, nf_ct_l4proto_unregister(builtin_l4proto6,
ARRAY_SIZE(builtin_l4proto6)); ARRAY_SIZE(builtin_l4proto6));
cleanup_hooks:
nf_unregister_hooks(ipv6_conntrack_ops, ARRAY_SIZE(ipv6_conntrack_ops));
cleanup_pernet: cleanup_pernet:
unregister_pernet_subsys(&ipv6_net_ops); unregister_pernet_subsys(&ipv6_net_ops);
cleanup_sockopt: cleanup_sockopt:
...@@ -426,7 +471,6 @@ static void __exit nf_conntrack_l3proto_ipv6_fini(void) ...@@ -426,7 +471,6 @@ static void __exit nf_conntrack_l3proto_ipv6_fini(void)
nf_ct_l3proto_unregister(&nf_conntrack_l3proto_ipv6); nf_ct_l3proto_unregister(&nf_conntrack_l3proto_ipv6);
nf_ct_l4proto_unregister(builtin_l4proto6, nf_ct_l4proto_unregister(builtin_l4proto6,
ARRAY_SIZE(builtin_l4proto6)); ARRAY_SIZE(builtin_l4proto6));
nf_unregister_hooks(ipv6_conntrack_ops, ARRAY_SIZE(ipv6_conntrack_ops));
unregister_pernet_subsys(&ipv6_net_ops); unregister_pernet_subsys(&ipv6_net_ops);
nf_unregister_sockopt(&so_getorigdst6); nf_unregister_sockopt(&so_getorigdst6);
} }
......
...@@ -30,6 +30,8 @@ ...@@ -30,6 +30,8 @@
#include <net/netfilter/nf_conntrack_zones.h> #include <net/netfilter/nf_conntrack_zones.h>
#include <net/netfilter/ipv6/nf_defrag_ipv6.h> #include <net/netfilter/ipv6/nf_defrag_ipv6.h>
static DEFINE_MUTEX(defrag6_mutex);
static enum ip6_defrag_users nf_ct6_defrag_user(unsigned int hooknum, static enum ip6_defrag_users nf_ct6_defrag_user(unsigned int hooknum,
struct sk_buff *skb) struct sk_buff *skb)
{ {
...@@ -87,6 +89,19 @@ static struct nf_hook_ops ipv6_defrag_ops[] = { ...@@ -87,6 +89,19 @@ static struct nf_hook_ops ipv6_defrag_ops[] = {
}, },
}; };
static void __net_exit defrag6_net_exit(struct net *net)
{
if (net->nf.defrag_ipv6) {
nf_unregister_net_hooks(net, ipv6_defrag_ops,
ARRAY_SIZE(ipv6_defrag_ops));
net->nf.defrag_ipv6 = false;
}
}
static struct pernet_operations defrag6_net_ops = {
.exit = defrag6_net_exit,
};
static int __init nf_defrag_init(void) static int __init nf_defrag_init(void)
{ {
int ret = 0; int ret = 0;
...@@ -96,9 +111,9 @@ static int __init nf_defrag_init(void) ...@@ -96,9 +111,9 @@ static int __init nf_defrag_init(void)
pr_err("nf_defrag_ipv6: can't initialize frag6.\n"); pr_err("nf_defrag_ipv6: can't initialize frag6.\n");
return ret; return ret;
} }
ret = nf_register_hooks(ipv6_defrag_ops, ARRAY_SIZE(ipv6_defrag_ops)); ret = register_pernet_subsys(&defrag6_net_ops);
if (ret < 0) { if (ret < 0) {
pr_err("nf_defrag_ipv6: can't register hooks\n"); pr_err("nf_defrag_ipv6: can't register pernet ops\n");
goto cleanup_frag6; goto cleanup_frag6;
} }
return ret; return ret;
...@@ -111,12 +126,31 @@ static int __init nf_defrag_init(void) ...@@ -111,12 +126,31 @@ static int __init nf_defrag_init(void)
static void __exit nf_defrag_fini(void) static void __exit nf_defrag_fini(void)
{ {
nf_unregister_hooks(ipv6_defrag_ops, ARRAY_SIZE(ipv6_defrag_ops)); unregister_pernet_subsys(&defrag6_net_ops);
nf_ct_frag6_cleanup(); nf_ct_frag6_cleanup();
} }
void nf_defrag_ipv6_enable(void) int nf_defrag_ipv6_enable(struct net *net)
{ {
int err = 0;
might_sleep();
if (net->nf.defrag_ipv6)
return 0;
mutex_lock(&defrag6_mutex);
if (net->nf.defrag_ipv6)
goto out_unlock;
err = nf_register_net_hooks(net, ipv6_defrag_ops,
ARRAY_SIZE(ipv6_defrag_ops));
if (err == 0)
net->nf.defrag_ipv6 = true;
out_unlock:
mutex_unlock(&defrag6_mutex);
return err;
} }
EXPORT_SYMBOL_GPL(nf_defrag_ipv6_enable); EXPORT_SYMBOL_GPL(nf_defrag_ipv6_enable);
......
...@@ -235,7 +235,7 @@ nft_fib6_select_ops(const struct nft_ctx *ctx, ...@@ -235,7 +235,7 @@ nft_fib6_select_ops(const struct nft_ctx *ctx,
if (!tb[NFTA_FIB_RESULT]) if (!tb[NFTA_FIB_RESULT])
return ERR_PTR(-EINVAL); return ERR_PTR(-EINVAL);
result = htonl(nla_get_be32(tb[NFTA_FIB_RESULT])); result = ntohl(nla_get_be32(tb[NFTA_FIB_RESULT]));
switch (result) { switch (result) {
case NFT_FIB_RESULT_OIF: case NFT_FIB_RESULT_OIF:
......
/* /*
* Copyright (c) 2014 Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com> * Copyright (c) 2014 Arturo Borrero Gonzalez <arturo@debian.org>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as * it under the terms of the GNU General Public License version 2 as
...@@ -36,12 +36,19 @@ static void nft_masq_ipv6_eval(const struct nft_expr *expr, ...@@ -36,12 +36,19 @@ static void nft_masq_ipv6_eval(const struct nft_expr *expr,
nft_out(pkt)); nft_out(pkt));
} }
static void
nft_masq_ipv6_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
{
nf_ct_netns_put(ctx->net, NFPROTO_IPV6);
}
static struct nft_expr_type nft_masq_ipv6_type; static struct nft_expr_type nft_masq_ipv6_type;
static const struct nft_expr_ops nft_masq_ipv6_ops = { static const struct nft_expr_ops nft_masq_ipv6_ops = {
.type = &nft_masq_ipv6_type, .type = &nft_masq_ipv6_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_masq)), .size = NFT_EXPR_SIZE(sizeof(struct nft_masq)),
.eval = nft_masq_ipv6_eval, .eval = nft_masq_ipv6_eval,
.init = nft_masq_init, .init = nft_masq_init,
.destroy = nft_masq_ipv6_destroy,
.dump = nft_masq_dump, .dump = nft_masq_dump,
.validate = nft_masq_validate, .validate = nft_masq_validate,
}; };
...@@ -78,5 +85,5 @@ module_init(nft_masq_ipv6_module_init); ...@@ -78,5 +85,5 @@ module_init(nft_masq_ipv6_module_init);
module_exit(nft_masq_ipv6_module_exit); module_exit(nft_masq_ipv6_module_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com>"); MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo@debian.org>");
MODULE_ALIAS_NFT_AF_EXPR(AF_INET6, "masq"); MODULE_ALIAS_NFT_AF_EXPR(AF_INET6, "masq");
/* /*
* Copyright (c) 2014 Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com> * Copyright (c) 2014 Arturo Borrero Gonzalez <arturo@debian.org>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as * it under the terms of the GNU General Public License version 2 as
...@@ -39,12 +39,19 @@ static void nft_redir_ipv6_eval(const struct nft_expr *expr, ...@@ -39,12 +39,19 @@ static void nft_redir_ipv6_eval(const struct nft_expr *expr,
nf_nat_redirect_ipv6(pkt->skb, &range, nft_hook(pkt)); nf_nat_redirect_ipv6(pkt->skb, &range, nft_hook(pkt));
} }
static void
nft_redir_ipv6_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
{
nf_ct_netns_put(ctx->net, NFPROTO_IPV6);
}
static struct nft_expr_type nft_redir_ipv6_type; static struct nft_expr_type nft_redir_ipv6_type;
static const struct nft_expr_ops nft_redir_ipv6_ops = { static const struct nft_expr_ops nft_redir_ipv6_ops = {
.type = &nft_redir_ipv6_type, .type = &nft_redir_ipv6_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_redir)), .size = NFT_EXPR_SIZE(sizeof(struct nft_redir)),
.eval = nft_redir_ipv6_eval, .eval = nft_redir_ipv6_eval,
.init = nft_redir_init, .init = nft_redir_init,
.destroy = nft_redir_ipv6_destroy,
.dump = nft_redir_dump, .dump = nft_redir_dump,
.validate = nft_redir_validate, .validate = nft_redir_validate,
}; };
...@@ -72,5 +79,5 @@ module_init(nft_redir_ipv6_module_init); ...@@ -72,5 +79,5 @@ module_init(nft_redir_ipv6_module_init);
module_exit(nft_redir_ipv6_module_exit); module_exit(nft_redir_ipv6_module_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com>"); MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo@debian.org>");
MODULE_ALIAS_NFT_AF_EXPR(AF_INET6, "redir"); MODULE_ALIAS_NFT_AF_EXPR(AF_INET6, "redir");
...@@ -146,38 +146,38 @@ config NF_CONNTRACK_LABELS ...@@ -146,38 +146,38 @@ config NF_CONNTRACK_LABELS
to connection tracking entries. It is selected by the connlabel match. to connection tracking entries. It is selected by the connlabel match.
config NF_CT_PROTO_DCCP config NF_CT_PROTO_DCCP
tristate 'DCCP protocol connection tracking support' bool 'DCCP protocol connection tracking support'
depends on NETFILTER_ADVANCED depends on NETFILTER_ADVANCED
default IP_DCCP default y
help help
With this option enabled, the layer 3 independent connection With this option enabled, the layer 3 independent connection
tracking code will be able to do state tracking on DCCP connections. tracking code will be able to do state tracking on DCCP connections.
If unsure, say 'N'. If unsure, say Y.
config NF_CT_PROTO_GRE config NF_CT_PROTO_GRE
tristate tristate
config NF_CT_PROTO_SCTP config NF_CT_PROTO_SCTP
tristate 'SCTP protocol connection tracking support' bool 'SCTP protocol connection tracking support'
depends on NETFILTER_ADVANCED depends on NETFILTER_ADVANCED
default IP_SCTP default y
help help
With this option enabled, the layer 3 independent connection With this option enabled, the layer 3 independent connection
tracking code will be able to do state tracking on SCTP connections. tracking code will be able to do state tracking on SCTP connections.
If you want to compile it as a module, say M here and read If unsure, say Y.
<file:Documentation/kbuild/modules.txt>. If unsure, say `N'.
config NF_CT_PROTO_UDPLITE config NF_CT_PROTO_UDPLITE
tristate 'UDP-Lite protocol connection tracking support' bool 'UDP-Lite protocol connection tracking support'
depends on NETFILTER_ADVANCED depends on NETFILTER_ADVANCED
default y
help help
With this option enabled, the layer 3 independent connection With this option enabled, the layer 3 independent connection
tracking code will be able to do state tracking on UDP-Lite tracking code will be able to do state tracking on UDP-Lite
connections. connections.
To compile it as a module, choose M here. If unsure, say N. If unsure, say Y.
config NF_CONNTRACK_AMANDA config NF_CONNTRACK_AMANDA
tristate "Amanda backup protocol support" tristate "Amanda backup protocol support"
...@@ -384,17 +384,17 @@ config NF_NAT_NEEDED ...@@ -384,17 +384,17 @@ config NF_NAT_NEEDED
default y default y
config NF_NAT_PROTO_DCCP config NF_NAT_PROTO_DCCP
tristate bool
depends on NF_NAT && NF_CT_PROTO_DCCP depends on NF_NAT && NF_CT_PROTO_DCCP
default NF_NAT && NF_CT_PROTO_DCCP default NF_NAT && NF_CT_PROTO_DCCP
config NF_NAT_PROTO_UDPLITE config NF_NAT_PROTO_UDPLITE
tristate bool
depends on NF_NAT && NF_CT_PROTO_UDPLITE depends on NF_NAT && NF_CT_PROTO_UDPLITE
default NF_NAT && NF_CT_PROTO_UDPLITE default NF_NAT && NF_CT_PROTO_UDPLITE
config NF_NAT_PROTO_SCTP config NF_NAT_PROTO_SCTP
tristate bool
default NF_NAT && NF_CT_PROTO_SCTP default NF_NAT && NF_CT_PROTO_SCTP
depends on NF_NAT && NF_CT_PROTO_SCTP depends on NF_NAT && NF_CT_PROTO_SCTP
select LIBCRC32C select LIBCRC32C
...@@ -551,6 +551,12 @@ config NFT_NAT ...@@ -551,6 +551,12 @@ config NFT_NAT
This option adds the "nat" expression that you can use to perform This option adds the "nat" expression that you can use to perform
typical Network Address Translation (NAT) packet transformations. typical Network Address Translation (NAT) packet transformations.
config NFT_OBJREF
tristate "Netfilter nf_tables stateful object reference module"
help
This option adds the "objref" expression that allows you to refer to
stateful objects, such as counters and quotas.
config NFT_QUEUE config NFT_QUEUE
depends on NETFILTER_NETLINK_QUEUE depends on NETFILTER_NETLINK_QUEUE
tristate "Netfilter nf_tables queue module" tristate "Netfilter nf_tables queue module"
......
...@@ -5,6 +5,9 @@ nf_conntrack-$(CONFIG_NF_CONNTRACK_TIMEOUT) += nf_conntrack_timeout.o ...@@ -5,6 +5,9 @@ nf_conntrack-$(CONFIG_NF_CONNTRACK_TIMEOUT) += nf_conntrack_timeout.o
nf_conntrack-$(CONFIG_NF_CONNTRACK_TIMESTAMP) += nf_conntrack_timestamp.o nf_conntrack-$(CONFIG_NF_CONNTRACK_TIMESTAMP) += nf_conntrack_timestamp.o
nf_conntrack-$(CONFIG_NF_CONNTRACK_EVENTS) += nf_conntrack_ecache.o nf_conntrack-$(CONFIG_NF_CONNTRACK_EVENTS) += nf_conntrack_ecache.o
nf_conntrack-$(CONFIG_NF_CONNTRACK_LABELS) += nf_conntrack_labels.o nf_conntrack-$(CONFIG_NF_CONNTRACK_LABELS) += nf_conntrack_labels.o
nf_conntrack-$(CONFIG_NF_CT_PROTO_DCCP) += nf_conntrack_proto_dccp.o
nf_conntrack-$(CONFIG_NF_CT_PROTO_SCTP) += nf_conntrack_proto_sctp.o
nf_conntrack-$(CONFIG_NF_CT_PROTO_UDPLITE) += nf_conntrack_proto_udplite.o
obj-$(CONFIG_NETFILTER) = netfilter.o obj-$(CONFIG_NETFILTER) = netfilter.o
...@@ -16,11 +19,7 @@ obj-$(CONFIG_NETFILTER_NETLINK_LOG) += nfnetlink_log.o ...@@ -16,11 +19,7 @@ obj-$(CONFIG_NETFILTER_NETLINK_LOG) += nfnetlink_log.o
# connection tracking # connection tracking
obj-$(CONFIG_NF_CONNTRACK) += nf_conntrack.o obj-$(CONFIG_NF_CONNTRACK) += nf_conntrack.o
# SCTP protocol connection tracking
obj-$(CONFIG_NF_CT_PROTO_DCCP) += nf_conntrack_proto_dccp.o
obj-$(CONFIG_NF_CT_PROTO_GRE) += nf_conntrack_proto_gre.o obj-$(CONFIG_NF_CT_PROTO_GRE) += nf_conntrack_proto_gre.o
obj-$(CONFIG_NF_CT_PROTO_SCTP) += nf_conntrack_proto_sctp.o
obj-$(CONFIG_NF_CT_PROTO_UDPLITE) += nf_conntrack_proto_udplite.o
# netlink interface for nf_conntrack # netlink interface for nf_conntrack
obj-$(CONFIG_NF_CT_NETLINK) += nf_conntrack_netlink.o obj-$(CONFIG_NF_CT_NETLINK) += nf_conntrack_netlink.o
...@@ -45,6 +44,11 @@ obj-$(CONFIG_NF_CONNTRACK_TFTP) += nf_conntrack_tftp.o ...@@ -45,6 +44,11 @@ obj-$(CONFIG_NF_CONNTRACK_TFTP) += nf_conntrack_tftp.o
nf_nat-y := nf_nat_core.o nf_nat_proto_unknown.o nf_nat_proto_common.o \ nf_nat-y := nf_nat_core.o nf_nat_proto_unknown.o nf_nat_proto_common.o \
nf_nat_proto_udp.o nf_nat_proto_tcp.o nf_nat_helper.o nf_nat_proto_udp.o nf_nat_proto_tcp.o nf_nat_helper.o
# NAT protocols (nf_nat)
nf_nat-$(CONFIG_NF_NAT_PROTO_DCCP) += nf_nat_proto_dccp.o
nf_nat-$(CONFIG_NF_NAT_PROTO_SCTP) += nf_nat_proto_sctp.o
nf_nat-$(CONFIG_NF_NAT_PROTO_UDPLITE) += nf_nat_proto_udplite.o
# generic transport layer logging # generic transport layer logging
obj-$(CONFIG_NF_LOG_COMMON) += nf_log_common.o obj-$(CONFIG_NF_LOG_COMMON) += nf_log_common.o
...@@ -54,11 +58,6 @@ obj-$(CONFIG_NF_LOG_NETDEV) += nf_log_netdev.o ...@@ -54,11 +58,6 @@ obj-$(CONFIG_NF_LOG_NETDEV) += nf_log_netdev.o
obj-$(CONFIG_NF_NAT) += nf_nat.o obj-$(CONFIG_NF_NAT) += nf_nat.o
obj-$(CONFIG_NF_NAT_REDIRECT) += nf_nat_redirect.o obj-$(CONFIG_NF_NAT_REDIRECT) += nf_nat_redirect.o
# NAT protocols (nf_nat)
obj-$(CONFIG_NF_NAT_PROTO_DCCP) += nf_nat_proto_dccp.o
obj-$(CONFIG_NF_NAT_PROTO_UDPLITE) += nf_nat_proto_udplite.o
obj-$(CONFIG_NF_NAT_PROTO_SCTP) += nf_nat_proto_sctp.o
# NAT helpers # NAT helpers
obj-$(CONFIG_NF_NAT_AMANDA) += nf_nat_amanda.o obj-$(CONFIG_NF_NAT_AMANDA) += nf_nat_amanda.o
obj-$(CONFIG_NF_NAT_FTP) += nf_nat_ftp.o obj-$(CONFIG_NF_NAT_FTP) += nf_nat_ftp.o
...@@ -89,6 +88,7 @@ obj-$(CONFIG_NFT_NUMGEN) += nft_numgen.o ...@@ -89,6 +88,7 @@ obj-$(CONFIG_NFT_NUMGEN) += nft_numgen.o
obj-$(CONFIG_NFT_CT) += nft_ct.o obj-$(CONFIG_NFT_CT) += nft_ct.o
obj-$(CONFIG_NFT_LIMIT) += nft_limit.o obj-$(CONFIG_NFT_LIMIT) += nft_limit.o
obj-$(CONFIG_NFT_NAT) += nft_nat.o obj-$(CONFIG_NFT_NAT) += nft_nat.o
obj-$(CONFIG_NFT_OBJREF) += nft_objref.o
obj-$(CONFIG_NFT_QUEUE) += nft_queue.o obj-$(CONFIG_NFT_QUEUE) += nft_queue.o
obj-$(CONFIG_NFT_QUOTA) += nft_quota.o obj-$(CONFIG_NFT_QUOTA) += nft_quota.o
obj-$(CONFIG_NFT_REJECT) += nft_reject.o obj-$(CONFIG_NFT_REJECT) += nft_reject.o
......
...@@ -102,17 +102,14 @@ int nf_register_net_hook(struct net *net, const struct nf_hook_ops *reg) ...@@ -102,17 +102,14 @@ int nf_register_net_hook(struct net *net, const struct nf_hook_ops *reg)
if (!entry) if (!entry)
return -ENOMEM; return -ENOMEM;
entry->orig_ops = reg; nf_hook_entry_init(entry, reg);
entry->ops = *reg;
entry->next = NULL;
mutex_lock(&nf_hook_mutex); mutex_lock(&nf_hook_mutex);
/* Find the spot in the list */ /* Find the spot in the list */
while ((p = nf_entry_dereference(*pp)) != NULL) { for (; (p = nf_entry_dereference(*pp)) != NULL; pp = &p->next) {
if (reg->priority < p->orig_ops->priority) if (reg->priority < nf_hook_entry_priority(p))
break; break;
pp = &p->next;
} }
rcu_assign_pointer(entry->next, p); rcu_assign_pointer(entry->next, p);
rcu_assign_pointer(*pp, entry); rcu_assign_pointer(*pp, entry);
...@@ -139,12 +136,11 @@ void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg) ...@@ -139,12 +136,11 @@ void nf_unregister_net_hook(struct net *net, const struct nf_hook_ops *reg)
return; return;
mutex_lock(&nf_hook_mutex); mutex_lock(&nf_hook_mutex);
while ((p = nf_entry_dereference(*pp)) != NULL) { for (; (p = nf_entry_dereference(*pp)) != NULL; pp = &p->next) {
if (p->orig_ops == reg) { if (nf_hook_entry_ops(p) == reg) {
rcu_assign_pointer(*pp, p->next); rcu_assign_pointer(*pp, p->next);
break; break;
} }
pp = &p->next;
} }
mutex_unlock(&nf_hook_mutex); mutex_unlock(&nf_hook_mutex);
if (!p) { if (!p) {
...@@ -311,7 +307,7 @@ int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state, ...@@ -311,7 +307,7 @@ int nf_hook_slow(struct sk_buff *skb, struct nf_hook_state *state,
int ret; int ret;
do { do {
verdict = entry->ops.hook(entry->ops.priv, skb, state); verdict = nf_hook_entry_hookfn(entry, skb, state);
switch (verdict & NF_VERDICT_MASK) { switch (verdict & NF_VERDICT_MASK) {
case NF_ACCEPT: case NF_ACCEPT:
entry = rcu_dereference(entry->next); entry = rcu_dereference(entry->next);
......
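The core.c hunk above leans on small accessors (nf_hook_entry_init(), nf_hook_entry_priority(), nf_hook_entry_hookfn(), nf_hook_entry_ops()) introduced elsewhere in the series. Reconstructed from their call sites here, they amount to little more than the following (a sketch, not the verbatim header additions):

	static inline void nf_hook_entry_init(struct nf_hook_entry *entry,
					      const struct nf_hook_ops *ops)
	{
		entry->orig_ops = ops;
		entry->ops = *ops;
		entry->next = NULL;
	}

	static inline int nf_hook_entry_priority(const struct nf_hook_entry *entry)
	{
		return entry->orig_ops->priority;
	}

	static inline int nf_hook_entry_hookfn(const struct nf_hook_entry *entry,
					       struct sk_buff *skb,
					       struct nf_hook_state *state)
	{
		return entry->ops.hook(entry->ops.priv, skb, state);
	}

	static inline const struct nf_hook_ops *
	nf_hook_entry_ops(const struct nf_hook_entry *entry)
	{
		return entry->orig_ops;
	}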
...@@ -3260,7 +3260,7 @@ static int ip_vs_genl_dump_dests(struct sk_buff *skb, ...@@ -3260,7 +3260,7 @@ static int ip_vs_genl_dump_dests(struct sk_buff *skb,
svc = ip_vs_genl_find_service(ipvs, attrs[IPVS_CMD_ATTR_SERVICE]); svc = ip_vs_genl_find_service(ipvs, attrs[IPVS_CMD_ATTR_SERVICE]);
if (IS_ERR(svc) || svc == NULL) if (IS_ERR_OR_NULL(svc))
goto out_err; goto out_err;
/* Dump the destinations */ /* Dump the destinations */
......
...@@ -254,6 +254,54 @@ static inline bool ensure_mtu_is_adequate(struct netns_ipvs *ipvs, int skb_af, ...@@ -254,6 +254,54 @@ static inline bool ensure_mtu_is_adequate(struct netns_ipvs *ipvs, int skb_af,
return true; return true;
} }
static inline bool decrement_ttl(struct netns_ipvs *ipvs,
int skb_af,
struct sk_buff *skb)
{
struct net *net = ipvs->net;
#ifdef CONFIG_IP_VS_IPV6
if (skb_af == AF_INET6) {
struct dst_entry *dst = skb_dst(skb);
/* check and decrement ttl */
if (ipv6_hdr(skb)->hop_limit <= 1) {
/* Force OUTPUT device used as source address */
skb->dev = dst->dev;
icmpv6_send(skb, ICMPV6_TIME_EXCEED,
ICMPV6_EXC_HOPLIMIT, 0);
__IP6_INC_STATS(net, ip6_dst_idev(dst),
IPSTATS_MIB_INHDRERRORS);
return false;
}
/* don't propagate ttl change to cloned packets */
if (!skb_make_writable(skb, sizeof(struct ipv6hdr)))
return false;
ipv6_hdr(skb)->hop_limit--;
} else
#endif
{
if (ip_hdr(skb)->ttl <= 1) {
/* Tell the sender its packet died... */
__IP_INC_STATS(net, IPSTATS_MIB_INHDRERRORS);
icmp_send(skb, ICMP_TIME_EXCEEDED, ICMP_EXC_TTL, 0);
return false;
}
/* don't propagate ttl change to cloned packets */
if (!skb_make_writable(skb, sizeof(struct iphdr)))
return false;
/* Decrease ttl */
ip_decrease_ttl(ip_hdr(skb));
}
return true;
}
/* Get route to destination or remote server */ /* Get route to destination or remote server */
static int static int
__ip_vs_get_out_rt(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb, __ip_vs_get_out_rt(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb,
...@@ -326,6 +374,9 @@ __ip_vs_get_out_rt(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb, ...@@ -326,6 +374,9 @@ __ip_vs_get_out_rt(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb,
return local; return local;
} }
if (!decrement_ttl(ipvs, skb_af, skb))
goto err_put;
if (likely(!(rt_mode & IP_VS_RT_MODE_TUNNEL))) { if (likely(!(rt_mode & IP_VS_RT_MODE_TUNNEL))) {
mtu = dst_mtu(&rt->dst); mtu = dst_mtu(&rt->dst);
} else { } else {
...@@ -473,6 +524,9 @@ __ip_vs_get_out_rt_v6(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb, ...@@ -473,6 +524,9 @@ __ip_vs_get_out_rt_v6(struct netns_ipvs *ipvs, int skb_af, struct sk_buff *skb,
return local; return local;
} }
if (!decrement_ttl(ipvs, skb_af, skb))
goto err_put;
/* MTU checking */ /* MTU checking */
if (likely(!(rt_mode & IP_VS_RT_MODE_TUNNEL))) if (likely(!(rt_mode & IP_VS_RT_MODE_TUNNEL)))
mtu = dst_mtu(&rt->dst); mtu = dst_mtu(&rt->dst);
......
...@@ -125,6 +125,54 @@ void nf_ct_l3proto_module_put(unsigned short l3proto) ...@@ -125,6 +125,54 @@ void nf_ct_l3proto_module_put(unsigned short l3proto)
} }
EXPORT_SYMBOL_GPL(nf_ct_l3proto_module_put); EXPORT_SYMBOL_GPL(nf_ct_l3proto_module_put);
int nf_ct_netns_get(struct net *net, u8 nfproto)
{
const struct nf_conntrack_l3proto *l3proto;
int ret;
might_sleep();
ret = nf_ct_l3proto_try_module_get(nfproto);
if (ret < 0)
return ret;
/* we already have a reference, can't fail */
rcu_read_lock();
l3proto = __nf_ct_l3proto_find(nfproto);
rcu_read_unlock();
if (!l3proto->net_ns_get)
return 0;
ret = l3proto->net_ns_get(net);
if (ret < 0)
nf_ct_l3proto_module_put(nfproto);
return ret;
}
EXPORT_SYMBOL_GPL(nf_ct_netns_get);
void nf_ct_netns_put(struct net *net, u8 nfproto)
{
const struct nf_conntrack_l3proto *l3proto;
might_sleep();
/* same as nf_ct_netns_get(), reference assumed */
rcu_read_lock();
l3proto = __nf_ct_l3proto_find(nfproto);
rcu_read_unlock();
if (WARN_ON(!l3proto))
return;
if (l3proto->net_ns_put)
l3proto->net_ns_put(net);
nf_ct_l3proto_module_put(nfproto);
}
EXPORT_SYMBOL_GPL(nf_ct_netns_put);
struct nf_conntrack_l4proto * struct nf_conntrack_l4proto *
nf_ct_l4proto_find_get(u_int16_t l3num, u_int8_t l4num) nf_ct_l4proto_find_get(u_int16_t l3num, u_int8_t l4num)
{ {
...@@ -190,20 +238,19 @@ int nf_ct_l3proto_register(struct nf_conntrack_l3proto *proto) ...@@ -190,20 +238,19 @@ int nf_ct_l3proto_register(struct nf_conntrack_l3proto *proto)
} }
EXPORT_SYMBOL_GPL(nf_ct_l3proto_register); EXPORT_SYMBOL_GPL(nf_ct_l3proto_register);
#ifdef CONFIG_SYSCTL
extern unsigned int nf_conntrack_default_on;
int nf_ct_l3proto_pernet_register(struct net *net, int nf_ct_l3proto_pernet_register(struct net *net,
struct nf_conntrack_l3proto *proto) struct nf_conntrack_l3proto *proto)
{ {
int ret; if (nf_conntrack_default_on == 0)
return 0;
if (proto->init_net) {
ret = proto->init_net(net);
if (ret < 0)
return ret;
}
return 0; return proto->net_ns_get ? proto->net_ns_get(net) : 0;
} }
EXPORT_SYMBOL_GPL(nf_ct_l3proto_pernet_register); EXPORT_SYMBOL_GPL(nf_ct_l3proto_pernet_register);
#endif
void nf_ct_l3proto_unregister(struct nf_conntrack_l3proto *proto) void nf_ct_l3proto_unregister(struct nf_conntrack_l3proto *proto)
{ {
...@@ -224,6 +271,16 @@ EXPORT_SYMBOL_GPL(nf_ct_l3proto_unregister); ...@@ -224,6 +271,16 @@ EXPORT_SYMBOL_GPL(nf_ct_l3proto_unregister);
void nf_ct_l3proto_pernet_unregister(struct net *net, void nf_ct_l3proto_pernet_unregister(struct net *net,
struct nf_conntrack_l3proto *proto) struct nf_conntrack_l3proto *proto)
{ {
/*
* nf_conntrack_default_on *might* have registered hooks.
* ->net_ns_put must cope with more puts() than gets(), i.e.
* if nf_conntrack_default_on was 0 at the time of the
* nf_ct_l3proto_pernet_register() invocation, this net_ns_put()
* should be a noop.
*/
if (proto->net_ns_put)
proto->net_ns_put(net);
/* Remove all contrack entries for this protocol */ /* Remove all contrack entries for this protocol */
nf_ct_iterate_cleanup(net, kill_l3proto, proto, 0, 0); nf_ct_iterate_cleanup(net, kill_l3proto, proto, 0, 0);
} }
......
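Consumers pair these helpers the same way the SYNPROXY hunk above does: take the per-netns conntrack dependency when a rule is checked in and drop it when the rule is destroyed. A generic sketch of that pairing (the function names are illustrative, not from the patch):

	static int example_checkentry(const struct xt_tgchk_param *par)
	{
		/* pulls in conntrack hooks for par->net on demand */
		return nf_ct_netns_get(par->net, par->family);
	}

	static void example_destroy(const struct xt_tgdtor_param *par)
	{
		nf_ct_netns_put(par->net, par->family);
	}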
...@@ -9,7 +9,6 @@ ...@@ -9,7 +9,6 @@
* *
*/ */
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/sysctl.h> #include <linux/sysctl.h>
#include <linux/spinlock.h> #include <linux/spinlock.h>
...@@ -384,17 +383,9 @@ dccp_state_table[CT_DCCP_ROLE_MAX + 1][DCCP_PKT_SYNCACK + 1][CT_DCCP_MAX + 1] = ...@@ -384,17 +383,9 @@ dccp_state_table[CT_DCCP_ROLE_MAX + 1][DCCP_PKT_SYNCACK + 1][CT_DCCP_MAX + 1] =
}, },
}; };
/* this module per-net specifics */ static inline struct nf_dccp_net *dccp_pernet(struct net *net)
static unsigned int dccp_net_id __read_mostly;
struct dccp_net {
struct nf_proto_net pn;
int dccp_loose;
unsigned int dccp_timeout[CT_DCCP_MAX + 1];
};
static inline struct dccp_net *dccp_pernet(struct net *net)
{ {
return net_generic(net, dccp_net_id); return &net->ct.nf_ct_proto.dccp;
} }
static bool dccp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, static bool dccp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff,
...@@ -424,7 +415,7 @@ static bool dccp_new(struct nf_conn *ct, const struct sk_buff *skb, ...@@ -424,7 +415,7 @@ static bool dccp_new(struct nf_conn *ct, const struct sk_buff *skb,
unsigned int dataoff, unsigned int *timeouts) unsigned int dataoff, unsigned int *timeouts)
{ {
struct net *net = nf_ct_net(ct); struct net *net = nf_ct_net(ct);
struct dccp_net *dn; struct nf_dccp_net *dn;
struct dccp_hdr _dh, *dh; struct dccp_hdr _dh, *dh;
const char *msg; const char *msg;
u_int8_t state; u_int8_t state;
...@@ -719,7 +710,7 @@ static int dccp_nlattr_size(void) ...@@ -719,7 +710,7 @@ static int dccp_nlattr_size(void)
static int dccp_timeout_nlattr_to_obj(struct nlattr *tb[], static int dccp_timeout_nlattr_to_obj(struct nlattr *tb[],
struct net *net, void *data) struct net *net, void *data)
{ {
struct dccp_net *dn = dccp_pernet(net); struct nf_dccp_net *dn = dccp_pernet(net);
unsigned int *timeouts = data; unsigned int *timeouts = data;
int i; int i;
...@@ -820,7 +811,7 @@ static struct ctl_table dccp_sysctl_table[] = { ...@@ -820,7 +811,7 @@ static struct ctl_table dccp_sysctl_table[] = {
#endif /* CONFIG_SYSCTL */ #endif /* CONFIG_SYSCTL */
static int dccp_kmemdup_sysctl_table(struct net *net, struct nf_proto_net *pn, static int dccp_kmemdup_sysctl_table(struct net *net, struct nf_proto_net *pn,
struct dccp_net *dn) struct nf_dccp_net *dn)
{ {
#ifdef CONFIG_SYSCTL #ifdef CONFIG_SYSCTL
if (pn->ctl_table) if (pn->ctl_table)
...@@ -850,7 +841,7 @@ static int dccp_kmemdup_sysctl_table(struct net *net, struct nf_proto_net *pn, ...@@ -850,7 +841,7 @@ static int dccp_kmemdup_sysctl_table(struct net *net, struct nf_proto_net *pn,
static int dccp_init_net(struct net *net, u_int16_t proto) static int dccp_init_net(struct net *net, u_int16_t proto)
{ {
struct dccp_net *dn = dccp_pernet(net); struct nf_dccp_net *dn = dccp_pernet(net);
struct nf_proto_net *pn = &dn->pn; struct nf_proto_net *pn = &dn->pn;
if (!pn->users) { if (!pn->users) {
...@@ -868,7 +859,7 @@ static int dccp_init_net(struct net *net, u_int16_t proto) ...@@ -868,7 +859,7 @@ static int dccp_init_net(struct net *net, u_int16_t proto)
return dccp_kmemdup_sysctl_table(net, pn, dn); return dccp_kmemdup_sysctl_table(net, pn, dn);
} }
static struct nf_conntrack_l4proto dccp_proto4 __read_mostly = { struct nf_conntrack_l4proto nf_conntrack_l4proto_dccp4 __read_mostly = {
.l3proto = AF_INET, .l3proto = AF_INET,
.l4proto = IPPROTO_DCCP, .l4proto = IPPROTO_DCCP,
.name = "dccp", .name = "dccp",
...@@ -898,11 +889,11 @@ static struct nf_conntrack_l4proto dccp_proto4 __read_mostly = { ...@@ -898,11 +889,11 @@ static struct nf_conntrack_l4proto dccp_proto4 __read_mostly = {
.nla_policy = dccp_timeout_nla_policy, .nla_policy = dccp_timeout_nla_policy,
}, },
#endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */
.net_id = &dccp_net_id,
.init_net = dccp_init_net, .init_net = dccp_init_net,
}; };
EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_dccp4);
static struct nf_conntrack_l4proto dccp_proto6 __read_mostly = { struct nf_conntrack_l4proto nf_conntrack_l4proto_dccp6 __read_mostly = {
.l3proto = AF_INET6, .l3proto = AF_INET6,
.l4proto = IPPROTO_DCCP, .l4proto = IPPROTO_DCCP,
.name = "dccp", .name = "dccp",
...@@ -932,56 +923,6 @@ static struct nf_conntrack_l4proto dccp_proto6 __read_mostly = { ...@@ -932,56 +923,6 @@ static struct nf_conntrack_l4proto dccp_proto6 __read_mostly = {
.nla_policy = dccp_timeout_nla_policy, .nla_policy = dccp_timeout_nla_policy,
}, },
#endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */
.net_id = &dccp_net_id,
.init_net = dccp_init_net, .init_net = dccp_init_net,
}; };
EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_dccp6);
static struct nf_conntrack_l4proto *dccp_proto[] = {
&dccp_proto4,
&dccp_proto6,
};
static __net_init int dccp_net_init(struct net *net)
{
return nf_ct_l4proto_pernet_register(net, dccp_proto,
ARRAY_SIZE(dccp_proto));
}
static __net_exit void dccp_net_exit(struct net *net)
{
nf_ct_l4proto_pernet_unregister(net, dccp_proto,
ARRAY_SIZE(dccp_proto));
}
static struct pernet_operations dccp_net_ops = {
.init = dccp_net_init,
.exit = dccp_net_exit,
.id = &dccp_net_id,
.size = sizeof(struct dccp_net),
};
static int __init nf_conntrack_proto_dccp_init(void)
{
int ret;
ret = register_pernet_subsys(&dccp_net_ops);
if (ret < 0)
return ret;
ret = nf_ct_l4proto_register(dccp_proto, ARRAY_SIZE(dccp_proto));
if (ret < 0)
unregister_pernet_subsys(&dccp_net_ops);
return ret;
}
static void __exit nf_conntrack_proto_dccp_fini(void)
{
nf_ct_l4proto_unregister(dccp_proto, ARRAY_SIZE(dccp_proto));
unregister_pernet_subsys(&dccp_net_ops);
}
module_init(nf_conntrack_proto_dccp_init);
module_exit(nf_conntrack_proto_dccp_fini);
MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
MODULE_DESCRIPTION("DCCP connection tracking protocol helper");
MODULE_LICENSE("GPL");
...@@ -15,7 +15,6 @@ ...@@ -15,7 +15,6 @@
#include <linux/types.h> #include <linux/types.h>
#include <linux/timer.h> #include <linux/timer.h>
#include <linux/netfilter.h> #include <linux/netfilter.h>
#include <linux/module.h>
#include <linux/in.h> #include <linux/in.h>
#include <linux/ip.h> #include <linux/ip.h>
#include <linux/sctp.h> #include <linux/sctp.h>
...@@ -144,15 +143,9 @@ static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = { ...@@ -144,15 +143,9 @@ static const u8 sctp_conntracks[2][11][SCTP_CONNTRACK_MAX] = {
} }
}; };
static unsigned int sctp_net_id __read_mostly; static inline struct nf_sctp_net *sctp_pernet(struct net *net)
struct sctp_net {
struct nf_proto_net pn;
unsigned int timeouts[SCTP_CONNTRACK_MAX];
};
static inline struct sctp_net *sctp_pernet(struct net *net)
{ {
return net_generic(net, sctp_net_id); return &net->ct.nf_ct_proto.sctp;
} }
static bool sctp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff, static bool sctp_pkt_to_tuple(const struct sk_buff *skb, unsigned int dataoff,
...@@ -600,7 +593,7 @@ static int sctp_timeout_nlattr_to_obj(struct nlattr *tb[], ...@@ -600,7 +593,7 @@ static int sctp_timeout_nlattr_to_obj(struct nlattr *tb[],
struct net *net, void *data) struct net *net, void *data)
{ {
unsigned int *timeouts = data; unsigned int *timeouts = data;
struct sctp_net *sn = sctp_pernet(net); struct nf_sctp_net *sn = sctp_pernet(net);
int i; int i;
/* set default SCTP timeouts. */ /* set default SCTP timeouts. */
...@@ -708,7 +701,7 @@ static struct ctl_table sctp_sysctl_table[] = { ...@@ -708,7 +701,7 @@ static struct ctl_table sctp_sysctl_table[] = {
#endif #endif
static int sctp_kmemdup_sysctl_table(struct nf_proto_net *pn, static int sctp_kmemdup_sysctl_table(struct nf_proto_net *pn,
struct sctp_net *sn) struct nf_sctp_net *sn)
{ {
#ifdef CONFIG_SYSCTL #ifdef CONFIG_SYSCTL
if (pn->ctl_table) if (pn->ctl_table)
...@@ -735,7 +728,7 @@ static int sctp_kmemdup_sysctl_table(struct nf_proto_net *pn, ...@@ -735,7 +728,7 @@ static int sctp_kmemdup_sysctl_table(struct nf_proto_net *pn,
static int sctp_init_net(struct net *net, u_int16_t proto) static int sctp_init_net(struct net *net, u_int16_t proto)
{ {
struct sctp_net *sn = sctp_pernet(net); struct nf_sctp_net *sn = sctp_pernet(net);
struct nf_proto_net *pn = &sn->pn; struct nf_proto_net *pn = &sn->pn;
if (!pn->users) { if (!pn->users) {
...@@ -748,7 +741,7 @@ static int sctp_init_net(struct net *net, u_int16_t proto) ...@@ -748,7 +741,7 @@ static int sctp_init_net(struct net *net, u_int16_t proto)
return sctp_kmemdup_sysctl_table(pn, sn); return sctp_kmemdup_sysctl_table(pn, sn);
} }
static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4 __read_mostly = { struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4 __read_mostly = {
.l3proto = PF_INET, .l3proto = PF_INET,
.l4proto = IPPROTO_SCTP, .l4proto = IPPROTO_SCTP,
.name = "sctp", .name = "sctp",
...@@ -778,11 +771,11 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4 __read_mostly = { ...@@ -778,11 +771,11 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp4 __read_mostly = {
.nla_policy = sctp_timeout_nla_policy, .nla_policy = sctp_timeout_nla_policy,
}, },
#endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */
.net_id = &sctp_net_id,
.init_net = sctp_init_net, .init_net = sctp_init_net,
}; };
EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_sctp4);
static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 __read_mostly = { struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 __read_mostly = {
.l3proto = PF_INET6, .l3proto = PF_INET6,
.l4proto = IPPROTO_SCTP, .l4proto = IPPROTO_SCTP,
.name = "sctp", .name = "sctp",
...@@ -812,57 +805,6 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 __read_mostly = { ...@@ -812,57 +805,6 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_sctp6 __read_mostly = {
}, },
#endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */
#endif #endif
.net_id = &sctp_net_id,
.init_net = sctp_init_net, .init_net = sctp_init_net,
}; };
EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_sctp6);
static struct nf_conntrack_l4proto *sctp_proto[] = {
&nf_conntrack_l4proto_sctp4,
&nf_conntrack_l4proto_sctp6,
};
static int sctp_net_init(struct net *net)
{
return nf_ct_l4proto_pernet_register(net, sctp_proto,
ARRAY_SIZE(sctp_proto));
}
static void sctp_net_exit(struct net *net)
{
nf_ct_l4proto_pernet_unregister(net, sctp_proto,
ARRAY_SIZE(sctp_proto));
}
static struct pernet_operations sctp_net_ops = {
.init = sctp_net_init,
.exit = sctp_net_exit,
.id = &sctp_net_id,
.size = sizeof(struct sctp_net),
};
static int __init nf_conntrack_proto_sctp_init(void)
{
int ret;
ret = register_pernet_subsys(&sctp_net_ops);
if (ret < 0)
return ret;
ret = nf_ct_l4proto_register(sctp_proto, ARRAY_SIZE(sctp_proto));
if (ret < 0)
unregister_pernet_subsys(&sctp_net_ops);
return ret;
}
static void __exit nf_conntrack_proto_sctp_fini(void)
{
nf_ct_l4proto_unregister(sctp_proto, ARRAY_SIZE(sctp_proto));
unregister_pernet_subsys(&sctp_net_ops);
}
module_init(nf_conntrack_proto_sctp_init);
module_exit(nf_conntrack_proto_sctp_fini);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Kiran Kumar Immidi");
MODULE_DESCRIPTION("Netfilter connection tracking protocol helper for SCTP");
MODULE_ALIAS("ip_conntrack_proto_sctp");
...@@ -9,7 +9,6 @@ ...@@ -9,7 +9,6 @@
#include <linux/types.h> #include <linux/types.h>
#include <linux/timer.h> #include <linux/timer.h>
#include <linux/module.h>
#include <linux/udp.h> #include <linux/udp.h>
#include <linux/seq_file.h> #include <linux/seq_file.h>
#include <linux/skbuff.h> #include <linux/skbuff.h>
...@@ -24,26 +23,14 @@ ...@@ -24,26 +23,14 @@
#include <net/netfilter/nf_conntrack_ecache.h> #include <net/netfilter/nf_conntrack_ecache.h>
#include <net/netfilter/nf_log.h> #include <net/netfilter/nf_log.h>
enum udplite_conntrack {
UDPLITE_CT_UNREPLIED,
UDPLITE_CT_REPLIED,
UDPLITE_CT_MAX
};
static unsigned int udplite_timeouts[UDPLITE_CT_MAX] = { static unsigned int udplite_timeouts[UDPLITE_CT_MAX] = {
[UDPLITE_CT_UNREPLIED] = 30*HZ, [UDPLITE_CT_UNREPLIED] = 30*HZ,
[UDPLITE_CT_REPLIED] = 180*HZ, [UDPLITE_CT_REPLIED] = 180*HZ,
}; };
static unsigned int udplite_net_id __read_mostly; static inline struct nf_udplite_net *udplite_pernet(struct net *net)
struct udplite_net {
struct nf_proto_net pn;
unsigned int timeouts[UDPLITE_CT_MAX];
};
static inline struct udplite_net *udplite_pernet(struct net *net)
{ {
return net_generic(net, udplite_net_id); return &net->ct.nf_ct_proto.udplite;
} }
static bool udplite_pkt_to_tuple(const struct sk_buff *skb, static bool udplite_pkt_to_tuple(const struct sk_buff *skb,
...@@ -178,7 +165,7 @@ static int udplite_timeout_nlattr_to_obj(struct nlattr *tb[], ...@@ -178,7 +165,7 @@ static int udplite_timeout_nlattr_to_obj(struct nlattr *tb[],
struct net *net, void *data) struct net *net, void *data)
{ {
unsigned int *timeouts = data; unsigned int *timeouts = data;
struct udplite_net *un = udplite_pernet(net); struct nf_udplite_net *un = udplite_pernet(net);
/* set default timeouts for UDPlite. */ /* set default timeouts for UDPlite. */
timeouts[UDPLITE_CT_UNREPLIED] = un->timeouts[UDPLITE_CT_UNREPLIED]; timeouts[UDPLITE_CT_UNREPLIED] = un->timeouts[UDPLITE_CT_UNREPLIED];
...@@ -237,7 +224,7 @@ static struct ctl_table udplite_sysctl_table[] = { ...@@ -237,7 +224,7 @@ static struct ctl_table udplite_sysctl_table[] = {
#endif /* CONFIG_SYSCTL */ #endif /* CONFIG_SYSCTL */
static int udplite_kmemdup_sysctl_table(struct nf_proto_net *pn, static int udplite_kmemdup_sysctl_table(struct nf_proto_net *pn,
struct udplite_net *un) struct nf_udplite_net *un)
{ {
#ifdef CONFIG_SYSCTL #ifdef CONFIG_SYSCTL
if (pn->ctl_table) if (pn->ctl_table)
...@@ -257,7 +244,7 @@ static int udplite_kmemdup_sysctl_table(struct nf_proto_net *pn, ...@@ -257,7 +244,7 @@ static int udplite_kmemdup_sysctl_table(struct nf_proto_net *pn,
static int udplite_init_net(struct net *net, u_int16_t proto) static int udplite_init_net(struct net *net, u_int16_t proto)
{ {
struct udplite_net *un = udplite_pernet(net); struct nf_udplite_net *un = udplite_pernet(net);
struct nf_proto_net *pn = &un->pn; struct nf_proto_net *pn = &un->pn;
if (!pn->users) { if (!pn->users) {
...@@ -270,7 +257,7 @@ static int udplite_init_net(struct net *net, u_int16_t proto) ...@@ -270,7 +257,7 @@ static int udplite_init_net(struct net *net, u_int16_t proto)
return udplite_kmemdup_sysctl_table(pn, un); return udplite_kmemdup_sysctl_table(pn, un);
} }
static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite4 __read_mostly = struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite4 __read_mostly =
{ {
.l3proto = PF_INET, .l3proto = PF_INET,
.l4proto = IPPROTO_UDPLITE, .l4proto = IPPROTO_UDPLITE,
...@@ -299,11 +286,11 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite4 __read_mostly = ...@@ -299,11 +286,11 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite4 __read_mostly =
.nla_policy = udplite_timeout_nla_policy, .nla_policy = udplite_timeout_nla_policy,
}, },
#endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */
.net_id = &udplite_net_id,
.init_net = udplite_init_net, .init_net = udplite_init_net,
}; };
EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_udplite4);
static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite6 __read_mostly = struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite6 __read_mostly =
{ {
.l3proto = PF_INET6, .l3proto = PF_INET6,
.l4proto = IPPROTO_UDPLITE, .l4proto = IPPROTO_UDPLITE,
...@@ -332,54 +319,6 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite6 __read_mostly = ...@@ -332,54 +319,6 @@ static struct nf_conntrack_l4proto nf_conntrack_l4proto_udplite6 __read_mostly =
.nla_policy = udplite_timeout_nla_policy, .nla_policy = udplite_timeout_nla_policy,
}, },
#endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */ #endif /* CONFIG_NF_CT_NETLINK_TIMEOUT */
.net_id = &udplite_net_id,
.init_net = udplite_init_net, .init_net = udplite_init_net,
}; };
EXPORT_SYMBOL_GPL(nf_conntrack_l4proto_udplite6);
static struct nf_conntrack_l4proto *udplite_proto[] = {
&nf_conntrack_l4proto_udplite4,
&nf_conntrack_l4proto_udplite6,
};
static int udplite_net_init(struct net *net)
{
return nf_ct_l4proto_pernet_register(net, udplite_proto,
ARRAY_SIZE(udplite_proto));
}
static void udplite_net_exit(struct net *net)
{
nf_ct_l4proto_pernet_unregister(net, udplite_proto,
ARRAY_SIZE(udplite_proto));
}
static struct pernet_operations udplite_net_ops = {
.init = udplite_net_init,
.exit = udplite_net_exit,
.id = &udplite_net_id,
.size = sizeof(struct udplite_net),
};
static int __init nf_conntrack_proto_udplite_init(void)
{
int ret;
ret = register_pernet_subsys(&udplite_net_ops);
if (ret < 0)
return ret;
ret = nf_ct_l4proto_register(udplite_proto, ARRAY_SIZE(udplite_proto));
if (ret < 0)
unregister_pernet_subsys(&udplite_net_ops);
return ret;
}
static void __exit nf_conntrack_proto_udplite_exit(void)
{
nf_ct_l4proto_unregister(udplite_proto, ARRAY_SIZE(udplite_proto));
unregister_pernet_subsys(&udplite_net_ops);
}
module_init(nf_conntrack_proto_udplite_init);
module_exit(nf_conntrack_proto_udplite_exit);
MODULE_LICENSE("GPL");
...@@ -452,6 +452,9 @@ static int log_invalid_proto_max __read_mostly = 255; ...@@ -452,6 +452,9 @@ static int log_invalid_proto_max __read_mostly = 255;
/* size the user *wants to set */ /* size the user *wants to set */
static unsigned int nf_conntrack_htable_size_user __read_mostly; static unsigned int nf_conntrack_htable_size_user __read_mostly;
extern unsigned int nf_conntrack_default_on;
unsigned int nf_conntrack_default_on __read_mostly = 1;
static int static int
nf_conntrack_hash_sysctl(struct ctl_table *table, int write, nf_conntrack_hash_sysctl(struct ctl_table *table, int write,
void __user *buffer, size_t *lenp, loff_t *ppos) void __user *buffer, size_t *lenp, loff_t *ppos)
...@@ -517,6 +520,13 @@ static struct ctl_table nf_ct_sysctl_table[] = { ...@@ -517,6 +520,13 @@ static struct ctl_table nf_ct_sysctl_table[] = {
.mode = 0644, .mode = 0644,
.proc_handler = proc_dointvec, .proc_handler = proc_dointvec,
}, },
{
.procname = "nf_conntrack_default_on",
.data = &nf_conntrack_default_on,
.maxlen = sizeof(unsigned int),
.mode = 0644,
.proc_handler = proc_dointvec,
},
{ } { }
}; };
......
...@@ -14,6 +14,29 @@ ...@@ -14,6 +14,29 @@
#include <linux/netfilter/nf_tables.h> #include <linux/netfilter/nf_tables.h>
#include <net/netfilter/nf_tables.h> #include <net/netfilter/nf_tables.h>
static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev)
{
if (skb_mac_header_was_set(skb))
skb_push(skb, skb->mac_len);
skb->dev = dev;
dev_queue_xmit(skb);
}
void nf_fwd_netdev_egress(const struct nft_pktinfo *pkt, int oif)
{
struct net_device *dev;
dev = dev_get_by_index_rcu(nft_net(pkt), oif);
if (!dev) {
kfree_skb(pkt->skb);
return;
}
nf_do_netdev_egress(pkt->skb, dev);
}
EXPORT_SYMBOL_GPL(nf_fwd_netdev_egress);
void nf_dup_netdev_egress(const struct nft_pktinfo *pkt, int oif) void nf_dup_netdev_egress(const struct nft_pktinfo *pkt, int oif)
{ {
struct net_device *dev; struct net_device *dev;
...@@ -24,14 +47,8 @@ void nf_dup_netdev_egress(const struct nft_pktinfo *pkt, int oif) ...@@ -24,14 +47,8 @@ void nf_dup_netdev_egress(const struct nft_pktinfo *pkt, int oif)
return; return;
skb = skb_clone(pkt->skb, GFP_ATOMIC); skb = skb_clone(pkt->skb, GFP_ATOMIC);
if (skb == NULL) if (skb)
return; nf_do_netdev_egress(skb, dev);
if (skb_mac_header_was_set(skb))
skb_push(skb, skb->mac_len);
skb->dev = dev;
dev_queue_xmit(skb);
} }
EXPORT_SYMBOL_GPL(nf_dup_netdev_egress); EXPORT_SYMBOL_GPL(nf_dup_netdev_egress);
......
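Unlike nf_dup_netdev_egress(), which clones, nf_fwd_netdev_egress() transmits the original skb, so its caller must report the packet as stolen. A sketch of how an nft fwd expression is expected to consume it (the private struct and register access here are assumptions, not shown in this hunk):

	static void example_fwd_netdev_eval(const struct nft_expr *expr,
					    struct nft_regs *regs,
					    const struct nft_pktinfo *pkt)
	{
		const struct example_fwd_netdev *priv = nft_expr_priv(expr);
		int oif = regs->data[priv->sreg_dev];

		nf_fwd_netdev_egress(pkt, oif);	/* consumes pkt->skb */
		regs->verdict.code = NF_STOLEN;
	}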
...@@ -177,6 +177,7 @@ EXPORT_SYMBOL_GPL(nf_log_dump_packet_common); ...@@ -177,6 +177,7 @@ EXPORT_SYMBOL_GPL(nf_log_dump_packet_common);
/* bridge and netdev logging families share this code. */ /* bridge and netdev logging families share this code. */
void nf_log_l2packet(struct net *net, u_int8_t pf, void nf_log_l2packet(struct net *net, u_int8_t pf,
__be16 protocol,
unsigned int hooknum, unsigned int hooknum,
const struct sk_buff *skb, const struct sk_buff *skb,
const struct net_device *in, const struct net_device *in,
...@@ -184,7 +185,7 @@ void nf_log_l2packet(struct net *net, u_int8_t pf, ...@@ -184,7 +185,7 @@ void nf_log_l2packet(struct net *net, u_int8_t pf,
const struct nf_loginfo *loginfo, const struct nf_loginfo *loginfo,
const char *prefix) const char *prefix)
{ {
switch (eth_hdr(skb)->h_proto) { switch (protocol) {
case htons(ETH_P_IP): case htons(ETH_P_IP):
nf_log_packet(net, NFPROTO_IPV4, hooknum, skb, in, out, nf_log_packet(net, NFPROTO_IPV4, hooknum, skb, in, out,
loginfo, "%s", prefix); loginfo, "%s", prefix);
......
...@@ -23,7 +23,8 @@ static void nf_log_netdev_packet(struct net *net, u_int8_t pf, ...@@ -23,7 +23,8 @@ static void nf_log_netdev_packet(struct net *net, u_int8_t pf,
const struct nf_loginfo *loginfo, const struct nf_loginfo *loginfo,
const char *prefix) const char *prefix)
{ {
nf_log_l2packet(net, pf, hooknum, skb, in, out, loginfo, prefix); nf_log_l2packet(net, pf, skb->protocol, hooknum, skb, in, out,
loginfo, prefix);
} }
static struct nf_logger nf_netdev_logger __read_mostly = { static struct nf_logger nf_netdev_logger __read_mostly = {
......
...@@ -682,6 +682,18 @@ int nf_nat_l3proto_register(const struct nf_nat_l3proto *l3proto) ...@@ -682,6 +682,18 @@ int nf_nat_l3proto_register(const struct nf_nat_l3proto *l3proto)
&nf_nat_l4proto_tcp); &nf_nat_l4proto_tcp);
RCU_INIT_POINTER(nf_nat_l4protos[l3proto->l3proto][IPPROTO_UDP], RCU_INIT_POINTER(nf_nat_l4protos[l3proto->l3proto][IPPROTO_UDP],
&nf_nat_l4proto_udp); &nf_nat_l4proto_udp);
#ifdef CONFIG_NF_NAT_PROTO_DCCP
RCU_INIT_POINTER(nf_nat_l4protos[l3proto->l3proto][IPPROTO_DCCP],
&nf_nat_l4proto_dccp);
#endif
#ifdef CONFIG_NF_NAT_PROTO_SCTP
RCU_INIT_POINTER(nf_nat_l4protos[l3proto->l3proto][IPPROTO_SCTP],
&nf_nat_l4proto_sctp);
#endif
#ifdef CONFIG_NF_NAT_PROTO_UDPLITE
RCU_INIT_POINTER(nf_nat_l4protos[l3proto->l3proto][IPPROTO_UDPLITE],
&nf_nat_l4proto_udplite);
#endif
mutex_unlock(&nf_nat_proto_mutex); mutex_unlock(&nf_nat_proto_mutex);
RCU_INIT_POINTER(nf_nat_l3protos[l3proto->l3proto], l3proto); RCU_INIT_POINTER(nf_nat_l3protos[l3proto->l3proto], l3proto);
......
...@@ -10,8 +10,6 @@ ...@@ -10,8 +10,6 @@
*/ */
#include <linux/kernel.h> #include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/skbuff.h> #include <linux/skbuff.h>
#include <linux/dccp.h> #include <linux/dccp.h>
...@@ -73,7 +71,7 @@ dccp_manip_pkt(struct sk_buff *skb, ...@@ -73,7 +71,7 @@ dccp_manip_pkt(struct sk_buff *skb,
return true; return true;
} }
static const struct nf_nat_l4proto nf_nat_l4proto_dccp = { const struct nf_nat_l4proto nf_nat_l4proto_dccp = {
.l4proto = IPPROTO_DCCP, .l4proto = IPPROTO_DCCP,
.manip_pkt = dccp_manip_pkt, .manip_pkt = dccp_manip_pkt,
.in_range = nf_nat_l4proto_in_range, .in_range = nf_nat_l4proto_in_range,
...@@ -82,35 +80,3 @@ static const struct nf_nat_l4proto nf_nat_l4proto_dccp = { ...@@ -82,35 +80,3 @@ static const struct nf_nat_l4proto nf_nat_l4proto_dccp = {
.nlattr_to_range = nf_nat_l4proto_nlattr_to_range, .nlattr_to_range = nf_nat_l4proto_nlattr_to_range,
#endif #endif
}; };
static int __init nf_nat_proto_dccp_init(void)
{
int err;
err = nf_nat_l4proto_register(NFPROTO_IPV4, &nf_nat_l4proto_dccp);
if (err < 0)
goto err1;
err = nf_nat_l4proto_register(NFPROTO_IPV6, &nf_nat_l4proto_dccp);
if (err < 0)
goto err2;
return 0;
err2:
nf_nat_l4proto_unregister(NFPROTO_IPV4, &nf_nat_l4proto_dccp);
err1:
return err;
}
static void __exit nf_nat_proto_dccp_fini(void)
{
nf_nat_l4proto_unregister(NFPROTO_IPV6, &nf_nat_l4proto_dccp);
nf_nat_l4proto_unregister(NFPROTO_IPV4, &nf_nat_l4proto_dccp);
}
module_init(nf_nat_proto_dccp_init);
module_exit(nf_nat_proto_dccp_fini);
MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
MODULE_DESCRIPTION("DCCP NAT protocol helper");
MODULE_LICENSE("GPL");
...@@ -7,9 +7,7 @@ ...@@ -7,9 +7,7 @@
*/ */
#include <linux/types.h> #include <linux/types.h>
#include <linux/init.h>
#include <linux/sctp.h> #include <linux/sctp.h>
#include <linux/module.h>
#include <net/sctp/checksum.h> #include <net/sctp/checksum.h>
#include <net/netfilter/nf_nat_l4proto.h> #include <net/netfilter/nf_nat_l4proto.h>
...@@ -49,12 +47,15 @@ sctp_manip_pkt(struct sk_buff *skb, ...@@ -49,12 +47,15 @@ sctp_manip_pkt(struct sk_buff *skb,
hdr->dest = tuple->dst.u.sctp.port; hdr->dest = tuple->dst.u.sctp.port;
} }
hdr->checksum = sctp_compute_cksum(skb, hdroff); if (skb->ip_summed != CHECKSUM_PARTIAL) {
hdr->checksum = sctp_compute_cksum(skb, hdroff);
skb->ip_summed = CHECKSUM_NONE;
}
return true; return true;
} }
static const struct nf_nat_l4proto nf_nat_l4proto_sctp = { const struct nf_nat_l4proto nf_nat_l4proto_sctp = {
.l4proto = IPPROTO_SCTP, .l4proto = IPPROTO_SCTP,
.manip_pkt = sctp_manip_pkt, .manip_pkt = sctp_manip_pkt,
.in_range = nf_nat_l4proto_in_range, .in_range = nf_nat_l4proto_in_range,
...@@ -63,34 +64,3 @@ static const struct nf_nat_l4proto nf_nat_l4proto_sctp = { ...@@ -63,34 +64,3 @@ static const struct nf_nat_l4proto nf_nat_l4proto_sctp = {
.nlattr_to_range = nf_nat_l4proto_nlattr_to_range, .nlattr_to_range = nf_nat_l4proto_nlattr_to_range,
#endif #endif
}; };
static int __init nf_nat_proto_sctp_init(void)
{
int err;
err = nf_nat_l4proto_register(NFPROTO_IPV4, &nf_nat_l4proto_sctp);
if (err < 0)
goto err1;
err = nf_nat_l4proto_register(NFPROTO_IPV6, &nf_nat_l4proto_sctp);
if (err < 0)
goto err2;
return 0;
err2:
nf_nat_l4proto_unregister(NFPROTO_IPV4, &nf_nat_l4proto_sctp);
err1:
return err;
}
static void __exit nf_nat_proto_sctp_exit(void)
{
nf_nat_l4proto_unregister(NFPROTO_IPV6, &nf_nat_l4proto_sctp);
nf_nat_l4proto_unregister(NFPROTO_IPV4, &nf_nat_l4proto_sctp);
}
module_init(nf_nat_proto_sctp_init);
module_exit(nf_nat_proto_sctp_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("SCTP NAT protocol helper");
MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
...@@ -8,11 +8,9 @@ ...@@ -8,11 +8,9 @@
*/ */
#include <linux/types.h> #include <linux/types.h>
#include <linux/init.h>
#include <linux/udp.h> #include <linux/udp.h>
#include <linux/netfilter.h> #include <linux/netfilter.h>
#include <linux/module.h>
#include <net/netfilter/nf_nat.h> #include <net/netfilter/nf_nat.h>
#include <net/netfilter/nf_nat_l3proto.h> #include <net/netfilter/nf_nat_l3proto.h>
#include <net/netfilter/nf_nat_l4proto.h> #include <net/netfilter/nf_nat_l4proto.h>
...@@ -64,7 +62,7 @@ udplite_manip_pkt(struct sk_buff *skb, ...@@ -64,7 +62,7 @@ udplite_manip_pkt(struct sk_buff *skb,
return true; return true;
} }
static const struct nf_nat_l4proto nf_nat_l4proto_udplite = { const struct nf_nat_l4proto nf_nat_l4proto_udplite = {
.l4proto = IPPROTO_UDPLITE, .l4proto = IPPROTO_UDPLITE,
.manip_pkt = udplite_manip_pkt, .manip_pkt = udplite_manip_pkt,
.in_range = nf_nat_l4proto_in_range, .in_range = nf_nat_l4proto_in_range,
...@@ -73,34 +71,3 @@ static const struct nf_nat_l4proto nf_nat_l4proto_udplite = { ...@@ -73,34 +71,3 @@ static const struct nf_nat_l4proto nf_nat_l4proto_udplite = {
.nlattr_to_range = nf_nat_l4proto_nlattr_to_range, .nlattr_to_range = nf_nat_l4proto_nlattr_to_range,
#endif #endif
}; };
static int __init nf_nat_proto_udplite_init(void)
{
int err;
err = nf_nat_l4proto_register(NFPROTO_IPV4, &nf_nat_l4proto_udplite);
if (err < 0)
goto err1;
err = nf_nat_l4proto_register(NFPROTO_IPV6, &nf_nat_l4proto_udplite);
if (err < 0)
goto err2;
return 0;
err2:
nf_nat_l4proto_unregister(NFPROTO_IPV4, &nf_nat_l4proto_udplite);
err1:
return err;
}
static void __exit nf_nat_proto_udplite_fini(void)
{
nf_nat_l4proto_unregister(NFPROTO_IPV6, &nf_nat_l4proto_udplite);
nf_nat_l4proto_unregister(NFPROTO_IPV4, &nf_nat_l4proto_udplite);
}
module_init(nf_nat_proto_udplite_init);
module_exit(nf_nat_proto_udplite_fini);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("UDP-Lite NAT protocol helper");
MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
...@@ -185,7 +185,7 @@ static unsigned int nf_iterate(struct sk_buff *skb, ...@@ -185,7 +185,7 @@ static unsigned int nf_iterate(struct sk_buff *skb,
do { do {
repeat: repeat:
verdict = (*entryp)->ops.hook((*entryp)->ops.priv, skb, state); verdict = nf_hook_entry_hookfn((*entryp), skb, state);
if (verdict != NF_ACCEPT) { if (verdict != NF_ACCEPT) {
if (verdict != NF_REPEAT) if (verdict != NF_REPEAT)
return verdict; return verdict;
...@@ -200,7 +200,6 @@ static unsigned int nf_iterate(struct sk_buff *skb, ...@@ -200,7 +200,6 @@ static unsigned int nf_iterate(struct sk_buff *skb,
void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict) void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict)
{ {
struct nf_hook_entry *hook_entry = entry->hook; struct nf_hook_entry *hook_entry = entry->hook;
struct nf_hook_ops *elem = &hook_entry->ops;
struct sk_buff *skb = entry->skb; struct sk_buff *skb = entry->skb;
const struct nf_afinfo *afinfo; const struct nf_afinfo *afinfo;
int err; int err;
...@@ -209,7 +208,7 @@ void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict) ...@@ -209,7 +208,7 @@ void nf_reinject(struct nf_queue_entry *entry, unsigned int verdict)
/* Continue traversal iff userspace said ok... */ /* Continue traversal iff userspace said ok... */
if (verdict == NF_REPEAT) if (verdict == NF_REPEAT)
verdict = elem->hook(elem->priv, skb, &entry->state); verdict = nf_hook_entry_hookfn(hook_entry, skb, &entry->state);
if (verdict == NF_ACCEPT) { if (verdict == NF_ACCEPT) {
afinfo = nf_get_afinfo(entry->state.pf); afinfo = nf_get_afinfo(entry->state.pf);
......
...@@ -1152,6 +1152,7 @@ MODULE_ALIAS_NF_LOGGER(AF_INET, 1); ...@@ -1152,6 +1152,7 @@ MODULE_ALIAS_NF_LOGGER(AF_INET, 1);
MODULE_ALIAS_NF_LOGGER(AF_INET6, 1); MODULE_ALIAS_NF_LOGGER(AF_INET6, 1);
MODULE_ALIAS_NF_LOGGER(AF_BRIDGE, 1); MODULE_ALIAS_NF_LOGGER(AF_BRIDGE, 1);
MODULE_ALIAS_NF_LOGGER(3, 1); /* NFPROTO_ARP */ MODULE_ALIAS_NF_LOGGER(3, 1); /* NFPROTO_ARP */
MODULE_ALIAS_NF_LOGGER(5, 1); /* NFPROTO_NETDEV */
module_init(nfnetlink_log_init); module_init(nfnetlink_log_init);
module_exit(nfnetlink_log_fini); module_exit(nfnetlink_log_fini);
...@@ -31,11 +31,10 @@ struct nft_counter_percpu_priv { ...@@ -31,11 +31,10 @@ struct nft_counter_percpu_priv {
struct nft_counter_percpu __percpu *counter; struct nft_counter_percpu __percpu *counter;
}; };
static void nft_counter_eval(const struct nft_expr *expr, static inline void nft_counter_do_eval(struct nft_counter_percpu_priv *priv,
struct nft_regs *regs, struct nft_regs *regs,
const struct nft_pktinfo *pkt) const struct nft_pktinfo *pkt)
{ {
struct nft_counter_percpu_priv *priv = nft_expr_priv(expr);
struct nft_counter_percpu *this_cpu; struct nft_counter_percpu *this_cpu;
local_bh_disable(); local_bh_disable();
...@@ -47,10 +46,64 @@ static void nft_counter_eval(const struct nft_expr *expr, ...@@ -47,10 +46,64 @@ static void nft_counter_eval(const struct nft_expr *expr,
local_bh_enable(); local_bh_enable();
} }
static void nft_counter_fetch(const struct nft_counter_percpu __percpu *counter, static inline void nft_counter_obj_eval(struct nft_object *obj,
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
{
struct nft_counter_percpu_priv *priv = nft_obj_data(obj);
nft_counter_do_eval(priv, regs, pkt);
}
static int nft_counter_do_init(const struct nlattr * const tb[],
struct nft_counter_percpu_priv *priv)
{
struct nft_counter_percpu __percpu *cpu_stats;
struct nft_counter_percpu *this_cpu;
cpu_stats = netdev_alloc_pcpu_stats(struct nft_counter_percpu);
if (cpu_stats == NULL)
return -ENOMEM;
preempt_disable();
this_cpu = this_cpu_ptr(cpu_stats);
if (tb[NFTA_COUNTER_PACKETS]) {
this_cpu->counter.packets =
be64_to_cpu(nla_get_be64(tb[NFTA_COUNTER_PACKETS]));
}
if (tb[NFTA_COUNTER_BYTES]) {
this_cpu->counter.bytes =
be64_to_cpu(nla_get_be64(tb[NFTA_COUNTER_BYTES]));
}
preempt_enable();
priv->counter = cpu_stats;
return 0;
}
static int nft_counter_obj_init(const struct nlattr * const tb[],
struct nft_object *obj)
{
struct nft_counter_percpu_priv *priv = nft_obj_data(obj);
return nft_counter_do_init(tb, priv);
}
static void nft_counter_do_destroy(struct nft_counter_percpu_priv *priv)
{
free_percpu(priv->counter);
}
static void nft_counter_obj_destroy(struct nft_object *obj)
{
struct nft_counter_percpu_priv *priv = nft_obj_data(obj);
nft_counter_do_destroy(priv);
}
static void nft_counter_fetch(struct nft_counter_percpu __percpu *counter,
struct nft_counter *total) struct nft_counter *total)
{ {
const struct nft_counter_percpu *cpu_stats; struct nft_counter_percpu *cpu_stats;
u64 bytes, packets; u64 bytes, packets;
unsigned int seq; unsigned int seq;
int cpu; int cpu;
...@@ -69,12 +122,52 @@ static void nft_counter_fetch(const struct nft_counter_percpu __percpu *counter, ...@@ -69,12 +122,52 @@ static void nft_counter_fetch(const struct nft_counter_percpu __percpu *counter,
} }
} }
static int nft_counter_dump(struct sk_buff *skb, const struct nft_expr *expr) static u64 __nft_counter_reset(u64 *counter)
{
u64 ret, old;
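	/* Swap the counter to zero atomically, retrying if it was incremented
	 * in between, so no packets are lost across a reset.
	 */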
do {
old = *counter;
ret = cmpxchg64(counter, old, 0);
} while (ret != old);
return ret;
}
static void nft_counter_reset(struct nft_counter_percpu __percpu *counter,
struct nft_counter *total)
{
struct nft_counter_percpu *cpu_stats;
u64 bytes, packets;
unsigned int seq;
int cpu;
memset(total, 0, sizeof(*total));
for_each_possible_cpu(cpu) {
bytes = packets = 0;
cpu_stats = per_cpu_ptr(counter, cpu);
do {
seq = u64_stats_fetch_begin_irq(&cpu_stats->syncp);
packets += __nft_counter_reset(&cpu_stats->counter.packets);
bytes += __nft_counter_reset(&cpu_stats->counter.bytes);
} while (u64_stats_fetch_retry_irq(&cpu_stats->syncp, seq));
total->packets += packets;
total->bytes += bytes;
}
}
static int nft_counter_do_dump(struct sk_buff *skb,
const struct nft_counter_percpu_priv *priv,
bool reset)
{ {
struct nft_counter_percpu_priv *priv = nft_expr_priv(expr);
struct nft_counter total; struct nft_counter total;
nft_counter_fetch(priv->counter, &total); if (reset)
nft_counter_reset(priv->counter, &total);
else
nft_counter_fetch(priv->counter, &total);
if (nla_put_be64(skb, NFTA_COUNTER_BYTES, cpu_to_be64(total.bytes), if (nla_put_be64(skb, NFTA_COUNTER_BYTES, cpu_to_be64(total.bytes),
NFTA_COUNTER_PAD) || NFTA_COUNTER_PAD) ||
...@@ -87,36 +180,54 @@ static int nft_counter_dump(struct sk_buff *skb, const struct nft_expr *expr) ...@@ -87,36 +180,54 @@ static int nft_counter_dump(struct sk_buff *skb, const struct nft_expr *expr)
return -1; return -1;
} }
static int nft_counter_obj_dump(struct sk_buff *skb,
struct nft_object *obj, bool reset)
{
struct nft_counter_percpu_priv *priv = nft_obj_data(obj);
return nft_counter_do_dump(skb, priv, reset);
}
static const struct nla_policy nft_counter_policy[NFTA_COUNTER_MAX + 1] = { static const struct nla_policy nft_counter_policy[NFTA_COUNTER_MAX + 1] = {
[NFTA_COUNTER_PACKETS] = { .type = NLA_U64 }, [NFTA_COUNTER_PACKETS] = { .type = NLA_U64 },
[NFTA_COUNTER_BYTES] = { .type = NLA_U64 }, [NFTA_COUNTER_BYTES] = { .type = NLA_U64 },
}; };
static struct nft_object_type nft_counter_obj __read_mostly = {
.type = NFT_OBJECT_COUNTER,
.size = sizeof(struct nft_counter_percpu_priv),
.maxattr = NFTA_COUNTER_MAX,
.policy = nft_counter_policy,
.eval = nft_counter_obj_eval,
.init = nft_counter_obj_init,
.destroy = nft_counter_obj_destroy,
.dump = nft_counter_obj_dump,
.owner = THIS_MODULE,
};
static void nft_counter_eval(const struct nft_expr *expr,
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
{
struct nft_counter_percpu_priv *priv = nft_expr_priv(expr);
nft_counter_do_eval(priv, regs, pkt);
}
static int nft_counter_dump(struct sk_buff *skb, const struct nft_expr *expr)
{
const struct nft_counter_percpu_priv *priv = nft_expr_priv(expr);
return nft_counter_do_dump(skb, priv, false);
}
static int nft_counter_init(const struct nft_ctx *ctx, static int nft_counter_init(const struct nft_ctx *ctx,
const struct nft_expr *expr, const struct nft_expr *expr,
const struct nlattr * const tb[]) const struct nlattr * const tb[])
{ {
struct nft_counter_percpu_priv *priv = nft_expr_priv(expr); struct nft_counter_percpu_priv *priv = nft_expr_priv(expr);
struct nft_counter_percpu __percpu *cpu_stats;
struct nft_counter_percpu *this_cpu;
cpu_stats = netdev_alloc_pcpu_stats(struct nft_counter_percpu); return nft_counter_do_init(tb, priv);
if (cpu_stats == NULL)
return -ENOMEM;
preempt_disable();
this_cpu = this_cpu_ptr(cpu_stats);
if (tb[NFTA_COUNTER_PACKETS]) {
this_cpu->counter.packets =
be64_to_cpu(nla_get_be64(tb[NFTA_COUNTER_PACKETS]));
}
if (tb[NFTA_COUNTER_BYTES]) {
this_cpu->counter.bytes =
be64_to_cpu(nla_get_be64(tb[NFTA_COUNTER_BYTES]));
}
preempt_enable();
priv->counter = cpu_stats;
return 0;
} }
static void nft_counter_destroy(const struct nft_ctx *ctx, static void nft_counter_destroy(const struct nft_ctx *ctx,
...@@ -124,7 +235,7 @@ static void nft_counter_destroy(const struct nft_ctx *ctx, ...@@ -124,7 +235,7 @@ static void nft_counter_destroy(const struct nft_ctx *ctx,
{ {
struct nft_counter_percpu_priv *priv = nft_expr_priv(expr); struct nft_counter_percpu_priv *priv = nft_expr_priv(expr);
free_percpu(priv->counter); nft_counter_do_destroy(priv);
} }
static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src) static int nft_counter_clone(struct nft_expr *dst, const struct nft_expr *src)
...@@ -174,12 +285,26 @@ static struct nft_expr_type nft_counter_type __read_mostly = { ...@@ -174,12 +285,26 @@ static struct nft_expr_type nft_counter_type __read_mostly = {
static int __init nft_counter_module_init(void) static int __init nft_counter_module_init(void)
{ {
return nft_register_expr(&nft_counter_type); int err;
err = nft_register_obj(&nft_counter_obj);
if (err < 0)
return err;
err = nft_register_expr(&nft_counter_type);
if (err < 0)
goto err1;
return 0;
err1:
nft_unregister_obj(&nft_counter_obj);
return err;
} }
static void __exit nft_counter_module_exit(void) static void __exit nft_counter_module_exit(void)
{ {
nft_unregister_expr(&nft_counter_type); nft_unregister_expr(&nft_counter_type);
nft_unregister_obj(&nft_counter_obj);
} }
module_init(nft_counter_module_init); module_init(nft_counter_module_init);
...@@ -188,3 +313,4 @@ module_exit(nft_counter_module_exit); ...@@ -188,3 +313,4 @@ module_exit(nft_counter_module_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>"); MODULE_AUTHOR("Patrick McHardy <kaber@trash.net>");
MODULE_ALIAS_NFT_EXPR("counter"); MODULE_ALIAS_NFT_EXPR("counter");
MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_COUNTER);
...@@ -208,37 +208,37 @@ static const struct nla_policy nft_ct_policy[NFTA_CT_MAX + 1] = { ...@@ -208,37 +208,37 @@ static const struct nla_policy nft_ct_policy[NFTA_CT_MAX + 1] = {
[NFTA_CT_SREG] = { .type = NLA_U32 }, [NFTA_CT_SREG] = { .type = NLA_U32 },
}; };
static int nft_ct_l3proto_try_module_get(uint8_t family) static int nft_ct_netns_get(struct net *net, uint8_t family)
{ {
int err; int err;
if (family == NFPROTO_INET) { if (family == NFPROTO_INET) {
err = nf_ct_l3proto_try_module_get(NFPROTO_IPV4); err = nf_ct_netns_get(net, NFPROTO_IPV4);
if (err < 0) if (err < 0)
goto err1; goto err1;
err = nf_ct_l3proto_try_module_get(NFPROTO_IPV6); err = nf_ct_netns_get(net, NFPROTO_IPV6);
if (err < 0) if (err < 0)
goto err2; goto err2;
} else { } else {
err = nf_ct_l3proto_try_module_get(family); err = nf_ct_netns_get(net, family);
if (err < 0) if (err < 0)
goto err1; goto err1;
} }
return 0; return 0;
err2: err2:
nf_ct_l3proto_module_put(NFPROTO_IPV4); nf_ct_netns_put(net, NFPROTO_IPV4);
err1: err1:
return err; return err;
} }
static void nft_ct_l3proto_module_put(uint8_t family) static void nft_ct_netns_put(struct net *net, uint8_t family)
{ {
if (family == NFPROTO_INET) { if (family == NFPROTO_INET) {
nf_ct_l3proto_module_put(NFPROTO_IPV4); nf_ct_netns_put(net, NFPROTO_IPV4);
nf_ct_l3proto_module_put(NFPROTO_IPV6); nf_ct_netns_put(net, NFPROTO_IPV6);
} else } else
nf_ct_l3proto_module_put(family); nf_ct_netns_put(net, family);
} }
static int nft_ct_get_init(const struct nft_ctx *ctx, static int nft_ct_get_init(const struct nft_ctx *ctx,
...@@ -342,7 +342,7 @@ static int nft_ct_get_init(const struct nft_ctx *ctx, ...@@ -342,7 +342,7 @@ static int nft_ct_get_init(const struct nft_ctx *ctx,
if (err < 0) if (err < 0)
return err; return err;
err = nft_ct_l3proto_try_module_get(ctx->afi->family); err = nft_ct_netns_get(ctx->net, ctx->afi->family);
if (err < 0) if (err < 0)
return err; return err;
...@@ -390,7 +390,7 @@ static int nft_ct_set_init(const struct nft_ctx *ctx, ...@@ -390,7 +390,7 @@ static int nft_ct_set_init(const struct nft_ctx *ctx,
if (err < 0) if (err < 0)
goto err1; goto err1;
err = nft_ct_l3proto_try_module_get(ctx->afi->family); err = nft_ct_netns_get(ctx->net, ctx->afi->family);
if (err < 0) if (err < 0)
goto err1; goto err1;
...@@ -405,7 +405,7 @@ static int nft_ct_set_init(const struct nft_ctx *ctx, ...@@ -405,7 +405,7 @@ static int nft_ct_set_init(const struct nft_ctx *ctx,
static void nft_ct_get_destroy(const struct nft_ctx *ctx, static void nft_ct_get_destroy(const struct nft_ctx *ctx,
const struct nft_expr *expr) const struct nft_expr *expr)
{ {
nft_ct_l3proto_module_put(ctx->afi->family); nf_ct_netns_put(ctx->net, ctx->afi->family);
} }
static void nft_ct_set_destroy(const struct nft_ctx *ctx, static void nft_ct_set_destroy(const struct nft_ctx *ctx,
...@@ -423,7 +423,7 @@ static void nft_ct_set_destroy(const struct nft_ctx *ctx, ...@@ -423,7 +423,7 @@ static void nft_ct_set_destroy(const struct nft_ctx *ctx,
break; break;
} }
nft_ct_l3proto_module_put(ctx->afi->family); nft_ct_netns_put(ctx->net, ctx->afi->family);
} }
static int nft_ct_get_dump(struct sk_buff *skb, const struct nft_expr *expr) static int nft_ct_get_dump(struct sk_buff *skb, const struct nft_expr *expr)
......
...@@ -86,7 +86,7 @@ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr, ...@@ -86,7 +86,7 @@ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
if ((priv->flags & (NFTA_FIB_F_SADDR | NFTA_FIB_F_DADDR)) == 0) if ((priv->flags & (NFTA_FIB_F_SADDR | NFTA_FIB_F_DADDR)) == 0)
return -EINVAL; return -EINVAL;
priv->result = htonl(nla_get_be32(tb[NFTA_FIB_RESULT])); priv->result = ntohl(nla_get_be32(tb[NFTA_FIB_RESULT]));
priv->dreg = nft_parse_register(tb[NFTA_FIB_DREG]); priv->dreg = nft_parse_register(tb[NFTA_FIB_DREG]);
switch (priv->result) { switch (priv->result) {
......
...@@ -26,8 +26,8 @@ static void nft_fwd_netdev_eval(const struct nft_expr *expr, ...@@ -26,8 +26,8 @@ static void nft_fwd_netdev_eval(const struct nft_expr *expr,
struct nft_fwd_netdev *priv = nft_expr_priv(expr); struct nft_fwd_netdev *priv = nft_expr_priv(expr);
int oif = regs->data[priv->sreg_dev]; int oif = regs->data[priv->sreg_dev];
nf_dup_netdev_egress(pkt, oif); nf_fwd_netdev_egress(pkt, oif);
regs->verdict.code = NF_DROP; regs->verdict.code = NF_STOLEN;
} }
static const struct nla_policy nft_fwd_netdev_policy[NFTA_FWD_MAX + 1] = { static const struct nla_policy nft_fwd_netdev_policy[NFTA_FWD_MAX + 1] = {
......
/* /*
* Copyright (c) 2014 Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com> * Copyright (c) 2014 Arturo Borrero Gonzalez <arturo@debian.org>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as * it under the terms of the GNU General Public License version 2 as
...@@ -77,7 +77,7 @@ int nft_masq_init(const struct nft_ctx *ctx, ...@@ -77,7 +77,7 @@ int nft_masq_init(const struct nft_ctx *ctx,
} }
} }
return 0; return nf_ct_netns_get(ctx->net, ctx->afi->family);
} }
EXPORT_SYMBOL_GPL(nft_masq_init); EXPORT_SYMBOL_GPL(nft_masq_init);
...@@ -105,4 +105,4 @@ int nft_masq_dump(struct sk_buff *skb, const struct nft_expr *expr) ...@@ -105,4 +105,4 @@ int nft_masq_dump(struct sk_buff *skb, const struct nft_expr *expr)
EXPORT_SYMBOL_GPL(nft_masq_dump); EXPORT_SYMBOL_GPL(nft_masq_dump);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com>"); MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo@debian.org>");
...@@ -209,7 +209,7 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr, ...@@ -209,7 +209,7 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
return -EINVAL; return -EINVAL;
} }
return 0; return nf_ct_netns_get(ctx->net, family);
} }
static int nft_nat_dump(struct sk_buff *skb, const struct nft_expr *expr) static int nft_nat_dump(struct sk_buff *skb, const struct nft_expr *expr)
...@@ -257,12 +257,21 @@ static int nft_nat_dump(struct sk_buff *skb, const struct nft_expr *expr) ...@@ -257,12 +257,21 @@ static int nft_nat_dump(struct sk_buff *skb, const struct nft_expr *expr)
return -1; return -1;
} }
static void
nft_nat_destroy(const struct nft_ctx *ctx, const struct nft_expr *expr)
{
const struct nft_nat *priv = nft_expr_priv(expr);
nf_ct_netns_put(ctx->net, priv->family);
}
static struct nft_expr_type nft_nat_type; static struct nft_expr_type nft_nat_type;
static const struct nft_expr_ops nft_nat_ops = { static const struct nft_expr_ops nft_nat_ops = {
.type = &nft_nat_type, .type = &nft_nat_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_nat)), .size = NFT_EXPR_SIZE(sizeof(struct nft_nat)),
.eval = nft_nat_eval, .eval = nft_nat_eval,
.init = nft_nat_init, .init = nft_nat_init,
.destroy = nft_nat_destroy,
.dump = nft_nat_dump, .dump = nft_nat_dump,
.validate = nft_nat_validate, .validate = nft_nat_validate,
}; };
......
/*
* Copyright (c) 2012-2016 Pablo Neira Ayuso <pablo@netfilter.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
* version 2, as published by the Free Software Foundation.
*/
#include <linux/init.h>
#include <linux/module.h>
#include <linux/skbuff.h>
#include <linux/netlink.h>
#include <linux/netfilter.h>
#include <linux/netfilter/nf_tables.h>
#include <net/netfilter/nf_tables.h>
#define nft_objref_priv(expr) *((struct nft_object **)nft_expr_priv(expr))
static void nft_objref_eval(const struct nft_expr *expr,
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
{
struct nft_object *obj = nft_objref_priv(expr);
obj->type->eval(obj, regs, pkt);
}
static int nft_objref_init(const struct nft_ctx *ctx,
const struct nft_expr *expr,
const struct nlattr * const tb[])
{
struct nft_object *obj = nft_objref_priv(expr);
u8 genmask = nft_genmask_next(ctx->net);
u32 objtype;
if (!tb[NFTA_OBJREF_IMM_NAME] ||
!tb[NFTA_OBJREF_IMM_TYPE])
return -EINVAL;
objtype = ntohl(nla_get_be32(tb[NFTA_OBJREF_IMM_TYPE]));
obj = nf_tables_obj_lookup(ctx->table, tb[NFTA_OBJREF_IMM_NAME], objtype,
genmask);
if (IS_ERR(obj))
return -ENOENT;
nft_objref_priv(expr) = obj;
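	/* Bump the object's use count; nft_objref_destroy() drops it again. */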
obj->use++;
return 0;
}
static int nft_objref_dump(struct sk_buff *skb, const struct nft_expr *expr)
{
const struct nft_object *obj = nft_objref_priv(expr);
if (nla_put_string(skb, NFTA_OBJREF_IMM_NAME, obj->name) ||
nla_put_be32(skb, NFTA_OBJREF_IMM_TYPE, htonl(obj->type->type)))
goto nla_put_failure;
return 0;
nla_put_failure:
return -1;
}
static void nft_objref_destroy(const struct nft_ctx *ctx,
const struct nft_expr *expr)
{
struct nft_object *obj = nft_objref_priv(expr);
obj->use--;
}
static struct nft_expr_type nft_objref_type;
static const struct nft_expr_ops nft_objref_ops = {
.type = &nft_objref_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_object *)),
.eval = nft_objref_eval,
.init = nft_objref_init,
.destroy = nft_objref_destroy,
.dump = nft_objref_dump,
};
struct nft_objref_map {
struct nft_set *set;
enum nft_registers sreg:8;
struct nft_set_binding binding;
};
static void nft_objref_map_eval(const struct nft_expr *expr,
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
{
struct nft_objref_map *priv = nft_expr_priv(expr);
const struct nft_set *set = priv->set;
const struct nft_set_ext *ext;
struct nft_object *obj;
bool found;
found = set->ops->lookup(nft_net(pkt), set, &regs->data[priv->sreg],
&ext);
if (!found) {
regs->verdict.code = NFT_BREAK;
return;
}
obj = *nft_set_ext_obj(ext);
obj->type->eval(obj, regs, pkt);
}
static int nft_objref_map_init(const struct nft_ctx *ctx,
const struct nft_expr *expr,
const struct nlattr * const tb[])
{
struct nft_objref_map *priv = nft_expr_priv(expr);
u8 genmask = nft_genmask_next(ctx->net);
struct nft_set *set;
int err;
set = nf_tables_set_lookup(ctx->table, tb[NFTA_OBJREF_SET_NAME], genmask);
if (IS_ERR(set)) {
if (tb[NFTA_OBJREF_SET_ID]) {
set = nf_tables_set_lookup_byid(ctx->net,
tb[NFTA_OBJREF_SET_ID],
genmask);
}
if (IS_ERR(set))
return PTR_ERR(set);
}
if (!(set->flags & NFT_SET_OBJECT))
return -EINVAL;
priv->sreg = nft_parse_register(tb[NFTA_OBJREF_SET_SREG]);
err = nft_validate_register_load(priv->sreg, set->klen);
if (err < 0)
return err;
priv->binding.flags = set->flags & NFT_SET_OBJECT;
err = nf_tables_bind_set(ctx, set, &priv->binding);
if (err < 0)
return err;
priv->set = set;
return 0;
}
static int nft_objref_map_dump(struct sk_buff *skb, const struct nft_expr *expr)
{
const struct nft_objref_map *priv = nft_expr_priv(expr);
if (nft_dump_register(skb, NFTA_OBJREF_SET_SREG, priv->sreg) ||
nla_put_string(skb, NFTA_OBJREF_SET_NAME, priv->set->name))
goto nla_put_failure;
return 0;
nla_put_failure:
return -1;
}
static void nft_objref_map_destroy(const struct nft_ctx *ctx,
const struct nft_expr *expr)
{
struct nft_objref_map *priv = nft_expr_priv(expr);
nf_tables_unbind_set(ctx, priv->set, &priv->binding);
}
static struct nft_expr_type nft_objref_type;
static const struct nft_expr_ops nft_objref_map_ops = {
.type = &nft_objref_type,
.size = NFT_EXPR_SIZE(sizeof(struct nft_objref_map)),
.eval = nft_objref_map_eval,
.init = nft_objref_map_init,
.destroy = nft_objref_map_destroy,
.dump = nft_objref_map_dump,
};
static const struct nft_expr_ops *
nft_objref_select_ops(const struct nft_ctx *ctx,
const struct nlattr * const tb[])
{
if (tb[NFTA_OBJREF_SET_SREG] &&
(tb[NFTA_OBJREF_SET_NAME] ||
tb[NFTA_OBJREF_SET_ID]))
return &nft_objref_map_ops;
else if (tb[NFTA_OBJREF_IMM_NAME] &&
tb[NFTA_OBJREF_IMM_TYPE])
return &nft_objref_ops;
return ERR_PTR(-EOPNOTSUPP);
}
static const struct nla_policy nft_objref_policy[NFTA_OBJREF_MAX + 1] = {
[NFTA_OBJREF_IMM_NAME] = { .type = NLA_STRING },
[NFTA_OBJREF_IMM_TYPE] = { .type = NLA_U32 },
[NFTA_OBJREF_SET_SREG] = { .type = NLA_U32 },
[NFTA_OBJREF_SET_NAME] = { .type = NLA_STRING },
[NFTA_OBJREF_SET_ID] = { .type = NLA_U32 },
};
static struct nft_expr_type nft_objref_type __read_mostly = {
.name = "objref",
.select_ops = nft_objref_select_ops,
.policy = nft_objref_policy,
.maxattr = NFTA_OBJREF_MAX,
.owner = THIS_MODULE,
};
static int __init nft_objref_module_init(void)
{
return nft_register_expr(&nft_objref_type);
}
static void __exit nft_objref_module_exit(void)
{
nft_unregister_expr(&nft_objref_type);
}
module_init(nft_objref_module_init);
module_exit(nft_objref_module_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>");
MODULE_ALIAS_NFT_EXPR("objref");
/* /*
* Copyright (c) 2008-2009 Patrick McHardy <kaber@trash.net> * Copyright (c) 2008-2009 Patrick McHardy <kaber@trash.net>
* Copyright (c) 2016 Pablo Neira Ayuso <pablo@netfilter.org>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as * it under the terms of the GNU General Public License version 2 as
...@@ -17,6 +18,10 @@ ...@@ -17,6 +18,10 @@
#include <linux/netfilter/nf_tables.h> #include <linux/netfilter/nf_tables.h>
#include <net/netfilter/nf_tables_core.h> #include <net/netfilter/nf_tables_core.h>
#include <net/netfilter/nf_tables.h> #include <net/netfilter/nf_tables.h>
/* For layer 4 checksum field offset. */
#include <linux/tcp.h>
#include <linux/udp.h>
#include <linux/icmpv6.h>
/* add vlan header into the user buffer if the tag was removed by offloads */
static bool static bool
...@@ -164,6 +169,87 @@ const struct nft_expr_ops nft_payload_fast_ops = { ...@@ -164,6 +169,87 @@ const struct nft_expr_ops nft_payload_fast_ops = {
.dump = nft_payload_dump, .dump = nft_payload_dump,
}; };
static inline void nft_csum_replace(__sum16 *sum, __wsum fsum, __wsum tsum)
{
*sum = csum_fold(csum_add(csum_sub(~csum_unfold(*sum), fsum), tsum));
if (*sum == 0)
*sum = CSUM_MANGLED_0;
}
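nft_csum_replace() performs the classic incremental checksum update: subtract the checksum of the bytes being overwritten (fsum), add the checksum of the new bytes (tsum) and refold, additionally mapping a zero result to CSUM_MANGLED_0 so a valid UDP checksum never reads as "no checksum". A minimal user-space sketch of the same one's-complement arithmetic on single 16-bit words (csum_update16 is illustrative only, not part of this patch):

#include <stdint.h>

/* RFC 1624-style update: csum' = ~(~csum + ~from + to), with the
 * end-around carry folded back into the low 16 bits.
 */
static uint16_t csum_update16(uint16_t csum, uint16_t from, uint16_t to)
{
	uint32_t sum = (uint16_t)~csum;

	sum += (uint16_t)~from;
	sum += to;
	sum = (sum & 0xffff) + (sum >> 16);	/* fold carries */
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}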
static bool nft_payload_udp_checksum(struct sk_buff *skb, unsigned int thoff)
{
struct udphdr *uh, _uh;
uh = skb_header_pointer(skb, thoff, sizeof(_uh), &_uh);
if (!uh)
return false;
return uh->check;
}
static int nft_payload_l4csum_offset(const struct nft_pktinfo *pkt,
struct sk_buff *skb,
unsigned int *l4csum_offset)
{
switch (pkt->tprot) {
case IPPROTO_TCP:
*l4csum_offset = offsetof(struct tcphdr, check);
break;
case IPPROTO_UDP:
if (!nft_payload_udp_checksum(skb, pkt->xt.thoff))
return -1;
/* Fall through. */
case IPPROTO_UDPLITE:
*l4csum_offset = offsetof(struct udphdr, check);
break;
case IPPROTO_ICMPV6:
*l4csum_offset = offsetof(struct icmp6hdr, icmp6_cksum);
break;
default:
return -1;
}
*l4csum_offset += pkt->xt.thoff;
return 0;
}
static int nft_payload_l4csum_update(const struct nft_pktinfo *pkt,
struct sk_buff *skb,
__wsum fsum, __wsum tsum)
{
int l4csum_offset;
__sum16 sum;
	/* If we cannot determine the layer 4 checksum offset, or this packet
	 * doesn't require layer 4 checksum recalculation, skip it.
*/
if (nft_payload_l4csum_offset(pkt, skb, &l4csum_offset) < 0)
return 0;
if (skb_copy_bits(skb, l4csum_offset, &sum, sizeof(sum)) < 0)
return -1;
	/* Checksum mangling for an arbitrary number of bytes, based on
* inet_proto_csum_replace*() functions.
*/
if (skb->ip_summed != CHECKSUM_PARTIAL) {
nft_csum_replace(&sum, fsum, tsum);
if (skb->ip_summed == CHECKSUM_COMPLETE) {
skb->csum = ~csum_add(csum_sub(~(skb->csum), fsum),
tsum);
}
} else {
sum = ~csum_fold(csum_add(csum_sub(csum_unfold(sum), fsum),
tsum));
}
if (!skb_make_writable(skb, l4csum_offset + sizeof(sum)) ||
skb_store_bits(skb, l4csum_offset, &sum, sizeof(sum)) < 0)
return -1;
return 0;
}
static void nft_payload_set_eval(const struct nft_expr *expr, static void nft_payload_set_eval(const struct nft_expr *expr,
struct nft_regs *regs, struct nft_regs *regs,
const struct nft_pktinfo *pkt) const struct nft_pktinfo *pkt)
...@@ -204,14 +290,15 @@ static void nft_payload_set_eval(const struct nft_expr *expr, ...@@ -204,14 +290,15 @@ static void nft_payload_set_eval(const struct nft_expr *expr,
fsum = skb_checksum(skb, offset, priv->len, 0); fsum = skb_checksum(skb, offset, priv->len, 0);
tsum = csum_partial(src, priv->len, 0); tsum = csum_partial(src, priv->len, 0);
sum = csum_fold(csum_add(csum_sub(~csum_unfold(sum), fsum), nft_csum_replace(&sum, fsum, tsum);
tsum));
if (sum == 0)
sum = CSUM_MANGLED_0;
if (!skb_make_writable(skb, csum_offset + sizeof(sum)) || if (!skb_make_writable(skb, csum_offset + sizeof(sum)) ||
skb_store_bits(skb, csum_offset, &sum, sizeof(sum)) < 0) skb_store_bits(skb, csum_offset, &sum, sizeof(sum)) < 0)
goto err; goto err;
if (priv->csum_flags &&
nft_payload_l4csum_update(pkt, skb, fsum, tsum) < 0)
goto err;
} }
if (!skb_make_writable(skb, max(offset + priv->len, 0)) || if (!skb_make_writable(skb, max(offset + priv->len, 0)) ||
...@@ -240,6 +327,15 @@ static int nft_payload_set_init(const struct nft_ctx *ctx, ...@@ -240,6 +327,15 @@ static int nft_payload_set_init(const struct nft_ctx *ctx,
if (tb[NFTA_PAYLOAD_CSUM_OFFSET]) if (tb[NFTA_PAYLOAD_CSUM_OFFSET])
priv->csum_offset = priv->csum_offset =
ntohl(nla_get_be32(tb[NFTA_PAYLOAD_CSUM_OFFSET])); ntohl(nla_get_be32(tb[NFTA_PAYLOAD_CSUM_OFFSET]));
if (tb[NFTA_PAYLOAD_CSUM_FLAGS]) {
u32 flags;
flags = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_CSUM_FLAGS]));
if (flags & ~NFT_PAYLOAD_L4CSUM_PSEUDOHDR)
return -EINVAL;
priv->csum_flags = flags;
}
switch (priv->csum_type) { switch (priv->csum_type) {
case NFT_PAYLOAD_CSUM_NONE: case NFT_PAYLOAD_CSUM_NONE:
...@@ -262,7 +358,8 @@ static int nft_payload_set_dump(struct sk_buff *skb, const struct nft_expr *expr ...@@ -262,7 +358,8 @@ static int nft_payload_set_dump(struct sk_buff *skb, const struct nft_expr *expr
nla_put_be32(skb, NFTA_PAYLOAD_LEN, htonl(priv->len)) || nla_put_be32(skb, NFTA_PAYLOAD_LEN, htonl(priv->len)) ||
nla_put_be32(skb, NFTA_PAYLOAD_CSUM_TYPE, htonl(priv->csum_type)) || nla_put_be32(skb, NFTA_PAYLOAD_CSUM_TYPE, htonl(priv->csum_type)) ||
nla_put_be32(skb, NFTA_PAYLOAD_CSUM_OFFSET, nla_put_be32(skb, NFTA_PAYLOAD_CSUM_OFFSET,
htonl(priv->csum_offset))) htonl(priv->csum_offset)) ||
nla_put_be32(skb, NFTA_PAYLOAD_CSUM_FLAGS, htonl(priv->csum_flags)))
goto nla_put_failure; goto nla_put_failure;
return 0; return 0;
......
...@@ -17,38 +17,59 @@ ...@@ -17,38 +17,59 @@
struct nft_quota { struct nft_quota {
u64 quota; u64 quota;
bool invert; unsigned long flags;
atomic64_t remain; atomic64_t consumed;
}; };
static inline bool nft_overquota(struct nft_quota *priv, static inline bool nft_overquota(struct nft_quota *priv,
const struct nft_pktinfo *pkt) const struct sk_buff *skb)
{ {
return atomic64_sub_return(pkt->skb->len, &priv->remain) < 0; return atomic64_add_return(skb->len, &priv->consumed) >= priv->quota;
} }
static void nft_quota_eval(const struct nft_expr *expr, static inline bool nft_quota_invert(struct nft_quota *priv)
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
{ {
struct nft_quota *priv = nft_expr_priv(expr); return priv->flags & NFT_QUOTA_F_INV;
}
if (nft_overquota(priv, pkt) ^ priv->invert) static inline void nft_quota_do_eval(struct nft_quota *priv,
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
{
if (nft_overquota(priv, pkt->skb) ^ nft_quota_invert(priv))
regs->verdict.code = NFT_BREAK; regs->verdict.code = NFT_BREAK;
} }
static const struct nla_policy nft_quota_policy[NFTA_QUOTA_MAX + 1] = { static const struct nla_policy nft_quota_policy[NFTA_QUOTA_MAX + 1] = {
[NFTA_QUOTA_BYTES] = { .type = NLA_U64 }, [NFTA_QUOTA_BYTES] = { .type = NLA_U64 },
[NFTA_QUOTA_FLAGS] = { .type = NLA_U32 }, [NFTA_QUOTA_FLAGS] = { .type = NLA_U32 },
[NFTA_QUOTA_CONSUMED] = { .type = NLA_U64 },
}; };
static int nft_quota_init(const struct nft_ctx *ctx, #define NFT_QUOTA_DEPLETED_BIT 1 /* From NFT_QUOTA_F_DEPLETED. */
const struct nft_expr *expr,
const struct nlattr * const tb[]) static void nft_quota_obj_eval(struct nft_object *obj,
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
{ {
struct nft_quota *priv = nft_expr_priv(expr); struct nft_quota *priv = nft_obj_data(obj);
u32 flags = 0; bool overquota;
u64 quota;
overquota = nft_overquota(priv, pkt->skb);
if (overquota ^ nft_quota_invert(priv))
regs->verdict.code = NFT_BREAK;
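	/* Report depletion to userspace only once: the first packet over the
	 * quota sets the bit; a dump with reset clears it again.
	 */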
if (overquota &&
!test_and_set_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags))
nft_obj_notify(nft_net(pkt), obj->table, obj, 0, 0,
NFT_MSG_NEWOBJ, nft_pf(pkt), 0, GFP_ATOMIC);
}
static int nft_quota_do_init(const struct nlattr * const tb[],
struct nft_quota *priv)
{
unsigned long flags = 0;
u64 quota, consumed = 0;
if (!tb[NFTA_QUOTA_BYTES]) if (!tb[NFTA_QUOTA_BYTES])
return -EINVAL; return -EINVAL;
...@@ -57,26 +78,60 @@ static int nft_quota_init(const struct nft_ctx *ctx, ...@@ -57,26 +78,60 @@ static int nft_quota_init(const struct nft_ctx *ctx,
if (quota > S64_MAX) if (quota > S64_MAX)
return -EOVERFLOW; return -EOVERFLOW;
if (tb[NFTA_QUOTA_CONSUMED]) {
consumed = be64_to_cpu(nla_get_be64(tb[NFTA_QUOTA_CONSUMED]));
if (consumed > quota)
return -EINVAL;
}
if (tb[NFTA_QUOTA_FLAGS]) { if (tb[NFTA_QUOTA_FLAGS]) {
flags = ntohl(nla_get_be32(tb[NFTA_QUOTA_FLAGS])); flags = ntohl(nla_get_be32(tb[NFTA_QUOTA_FLAGS]));
if (flags & ~NFT_QUOTA_F_INV) if (flags & ~NFT_QUOTA_F_INV)
return -EINVAL; return -EINVAL;
if (flags & NFT_QUOTA_F_DEPLETED)
return -EOPNOTSUPP;
} }
priv->quota = quota; priv->quota = quota;
priv->invert = (flags & NFT_QUOTA_F_INV) ? true : false; priv->flags = flags;
atomic64_set(&priv->remain, quota); atomic64_set(&priv->consumed, consumed);
return 0; return 0;
} }
static int nft_quota_dump(struct sk_buff *skb, const struct nft_expr *expr) static int nft_quota_obj_init(const struct nlattr * const tb[],
struct nft_object *obj)
{
struct nft_quota *priv = nft_obj_data(obj);
return nft_quota_do_init(tb, priv);
}
static int nft_quota_do_dump(struct sk_buff *skb, struct nft_quota *priv,
bool reset)
{ {
const struct nft_quota *priv = nft_expr_priv(expr); u32 flags = priv->flags;
u32 flags = priv->invert ? NFT_QUOTA_F_INV : 0; u64 consumed;
if (reset) {
consumed = atomic64_xchg(&priv->consumed, 0);
if (test_and_clear_bit(NFT_QUOTA_DEPLETED_BIT, &priv->flags))
flags |= NFT_QUOTA_F_DEPLETED;
} else {
consumed = atomic64_read(&priv->consumed);
}
	/* Since we unconditionally increment consumed quota for each packet
* that we see, don't go over the quota boundary in what we send to
* userspace.
*/
if (consumed > priv->quota)
consumed = priv->quota;
if (nla_put_be64(skb, NFTA_QUOTA_BYTES, cpu_to_be64(priv->quota), if (nla_put_be64(skb, NFTA_QUOTA_BYTES, cpu_to_be64(priv->quota),
NFTA_QUOTA_PAD) || NFTA_QUOTA_PAD) ||
nla_put_be64(skb, NFTA_QUOTA_CONSUMED, cpu_to_be64(consumed),
NFTA_QUOTA_PAD) ||
nla_put_be32(skb, NFTA_QUOTA_FLAGS, htonl(flags))) nla_put_be32(skb, NFTA_QUOTA_FLAGS, htonl(flags)))
goto nla_put_failure; goto nla_put_failure;
return 0; return 0;
...@@ -85,6 +140,50 @@ static int nft_quota_dump(struct sk_buff *skb, const struct nft_expr *expr) ...@@ -85,6 +140,50 @@ static int nft_quota_dump(struct sk_buff *skb, const struct nft_expr *expr)
return -1; return -1;
} }
static int nft_quota_obj_dump(struct sk_buff *skb, struct nft_object *obj,
bool reset)
{
struct nft_quota *priv = nft_obj_data(obj);
return nft_quota_do_dump(skb, priv, reset);
}
static struct nft_object_type nft_quota_obj __read_mostly = {
.type = NFT_OBJECT_QUOTA,
.size = sizeof(struct nft_quota),
.maxattr = NFTA_QUOTA_MAX,
.policy = nft_quota_policy,
.init = nft_quota_obj_init,
.eval = nft_quota_obj_eval,
.dump = nft_quota_obj_dump,
.owner = THIS_MODULE,
};
static void nft_quota_eval(const struct nft_expr *expr,
struct nft_regs *regs,
const struct nft_pktinfo *pkt)
{
struct nft_quota *priv = nft_expr_priv(expr);
nft_quota_do_eval(priv, regs, pkt);
}
static int nft_quota_init(const struct nft_ctx *ctx,
const struct nft_expr *expr,
const struct nlattr * const tb[])
{
struct nft_quota *priv = nft_expr_priv(expr);
return nft_quota_do_init(tb, priv);
}
static int nft_quota_dump(struct sk_buff *skb, const struct nft_expr *expr)
{
struct nft_quota *priv = nft_expr_priv(expr);
return nft_quota_do_dump(skb, priv, false);
}
static struct nft_expr_type nft_quota_type; static struct nft_expr_type nft_quota_type;
static const struct nft_expr_ops nft_quota_ops = { static const struct nft_expr_ops nft_quota_ops = {
.type = &nft_quota_type, .type = &nft_quota_type,
...@@ -105,12 +204,26 @@ static struct nft_expr_type nft_quota_type __read_mostly = { ...@@ -105,12 +204,26 @@ static struct nft_expr_type nft_quota_type __read_mostly = {
static int __init nft_quota_module_init(void) static int __init nft_quota_module_init(void)
{ {
return nft_register_expr(&nft_quota_type); int err;
err = nft_register_obj(&nft_quota_obj);
if (err < 0)
return err;
err = nft_register_expr(&nft_quota_type);
if (err < 0)
goto err1;
return 0;
err1:
nft_unregister_obj(&nft_quota_obj);
return err;
} }
static void __exit nft_quota_module_exit(void) static void __exit nft_quota_module_exit(void)
{ {
nft_unregister_expr(&nft_quota_type); nft_unregister_expr(&nft_quota_type);
nft_unregister_obj(&nft_quota_obj);
} }
module_init(nft_quota_module_init); module_init(nft_quota_module_init);
...@@ -119,3 +232,4 @@ module_exit(nft_quota_module_exit); ...@@ -119,3 +232,4 @@ module_exit(nft_quota_module_exit);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>"); MODULE_AUTHOR("Pablo Neira Ayuso <pablo@netfilter.org>");
MODULE_ALIAS_NFT_EXPR("quota"); MODULE_ALIAS_NFT_EXPR("quota");
MODULE_ALIAS_NFT_OBJ(NFT_OBJECT_QUOTA);
/* /*
* Copyright (c) 2014 Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com> * Copyright (c) 2014 Arturo Borrero Gonzalez <arturo@debian.org>
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as * it under the terms of the GNU General Public License version 2 as
...@@ -79,7 +79,7 @@ int nft_redir_init(const struct nft_ctx *ctx, ...@@ -79,7 +79,7 @@ int nft_redir_init(const struct nft_ctx *ctx,
return -EINVAL; return -EINVAL;
} }
return 0; return nf_ct_netns_get(ctx->net, ctx->afi->family);
} }
EXPORT_SYMBOL_GPL(nft_redir_init); EXPORT_SYMBOL_GPL(nft_redir_init);
...@@ -108,4 +108,4 @@ int nft_redir_dump(struct sk_buff *skb, const struct nft_expr *expr) ...@@ -108,4 +108,4 @@ int nft_redir_dump(struct sk_buff *skb, const struct nft_expr *expr)
EXPORT_SYMBOL_GPL(nft_redir_dump); EXPORT_SYMBOL_GPL(nft_redir_dump);
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com>"); MODULE_AUTHOR("Arturo Borrero Gonzalez <arturo@debian.org>");
...@@ -167,6 +167,19 @@ static void nft_hash_activate(const struct net *net, const struct nft_set *set, ...@@ -167,6 +167,19 @@ static void nft_hash_activate(const struct net *net, const struct nft_set *set,
nft_set_elem_clear_busy(&he->ext); nft_set_elem_clear_busy(&he->ext);
} }
static bool nft_hash_deactivate_one(const struct net *net,
const struct nft_set *set, void *priv)
{
struct nft_hash_elem *he = priv;
if (!nft_set_elem_mark_busy(&he->ext) ||
!nft_is_active(net, &he->ext)) {
nft_set_elem_change_active(net, set, &he->ext);
return true;
}
return false;
}
static void *nft_hash_deactivate(const struct net *net, static void *nft_hash_deactivate(const struct net *net,
const struct nft_set *set, const struct nft_set *set,
const struct nft_set_elem *elem) const struct nft_set_elem *elem)
...@@ -181,13 +194,10 @@ static void *nft_hash_deactivate(const struct net *net, ...@@ -181,13 +194,10 @@ static void *nft_hash_deactivate(const struct net *net,
rcu_read_lock(); rcu_read_lock();
he = rhashtable_lookup_fast(&priv->ht, &arg, nft_hash_params); he = rhashtable_lookup_fast(&priv->ht, &arg, nft_hash_params);
if (he != NULL) { if (he != NULL &&
if (!nft_set_elem_mark_busy(&he->ext) || !nft_hash_deactivate_one(net, set, he))
!nft_is_active(net, &he->ext)) he = NULL;
nft_set_elem_change_active(net, set, &he->ext);
else
he = NULL;
}
rcu_read_unlock(); rcu_read_unlock();
return he; return he;
...@@ -387,6 +397,7 @@ static struct nft_set_ops nft_hash_ops __read_mostly = { ...@@ -387,6 +397,7 @@ static struct nft_set_ops nft_hash_ops __read_mostly = {
.insert = nft_hash_insert, .insert = nft_hash_insert,
.activate = nft_hash_activate, .activate = nft_hash_activate,
.deactivate = nft_hash_deactivate, .deactivate = nft_hash_deactivate,
.deactivate_one = nft_hash_deactivate_one,
.remove = nft_hash_remove, .remove = nft_hash_remove,
.lookup = nft_hash_lookup, .lookup = nft_hash_lookup,
.update = nft_hash_update, .update = nft_hash_update,
......
...@@ -171,6 +171,15 @@ static void nft_rbtree_activate(const struct net *net, ...@@ -171,6 +171,15 @@ static void nft_rbtree_activate(const struct net *net,
nft_set_elem_change_active(net, set, &rbe->ext); nft_set_elem_change_active(net, set, &rbe->ext);
} }
static bool nft_rbtree_deactivate_one(const struct net *net,
const struct nft_set *set, void *priv)
{
struct nft_rbtree_elem *rbe = priv;
nft_set_elem_change_active(net, set, &rbe->ext);
return true;
}
static void *nft_rbtree_deactivate(const struct net *net, static void *nft_rbtree_deactivate(const struct net *net,
const struct nft_set *set, const struct nft_set *set,
const struct nft_set_elem *elem) const struct nft_set_elem *elem)
...@@ -204,7 +213,7 @@ static void *nft_rbtree_deactivate(const struct net *net, ...@@ -204,7 +213,7 @@ static void *nft_rbtree_deactivate(const struct net *net,
parent = parent->rb_right; parent = parent->rb_right;
continue; continue;
} }
nft_set_elem_change_active(net, set, &rbe->ext); nft_rbtree_deactivate_one(net, set, rbe);
return rbe; return rbe;
} }
} }
...@@ -295,6 +304,7 @@ static struct nft_set_ops nft_rbtree_ops __read_mostly = { ...@@ -295,6 +304,7 @@ static struct nft_set_ops nft_rbtree_ops __read_mostly = {
.insert = nft_rbtree_insert, .insert = nft_rbtree_insert,
.remove = nft_rbtree_remove, .remove = nft_rbtree_remove,
.deactivate = nft_rbtree_deactivate, .deactivate = nft_rbtree_deactivate,
.deactivate_one = nft_rbtree_deactivate_one,
.activate = nft_rbtree_activate, .activate = nft_rbtree_activate,
.lookup = nft_rbtree_lookup, .lookup = nft_rbtree_lookup,
.walk = nft_rbtree_walk, .walk = nft_rbtree_walk,
......
...@@ -40,6 +40,7 @@ MODULE_AUTHOR("Harald Welte <laforge@netfilter.org>"); ...@@ -40,6 +40,7 @@ MODULE_AUTHOR("Harald Welte <laforge@netfilter.org>");
MODULE_DESCRIPTION("{ip,ip6,arp,eb}_tables backend module"); MODULE_DESCRIPTION("{ip,ip6,arp,eb}_tables backend module");
#define SMP_ALIGN(x) (((x) + SMP_CACHE_BYTES-1) & ~(SMP_CACHE_BYTES-1)) #define SMP_ALIGN(x) (((x) + SMP_CACHE_BYTES-1) & ~(SMP_CACHE_BYTES-1))
#define XT_PCPU_BLOCK_SIZE 4096
struct compat_delta { struct compat_delta {
unsigned int offset; /* offset in kernel */ unsigned int offset; /* offset in kernel */
...@@ -958,7 +959,9 @@ struct xt_table_info *xt_alloc_table_info(unsigned int size) ...@@ -958,7 +959,9 @@ struct xt_table_info *xt_alloc_table_info(unsigned int size)
if (sz <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) if (sz <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER))
info = kmalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); info = kmalloc(sz, GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY);
if (!info) { if (!info) {
info = vmalloc(sz); info = __vmalloc(sz, GFP_KERNEL | __GFP_NOWARN |
__GFP_NORETRY | __GFP_HIGHMEM,
PAGE_KERNEL);
if (!info) if (!info)
return NULL; return NULL;
} }
...@@ -1615,6 +1618,59 @@ void xt_proto_fini(struct net *net, u_int8_t af) ...@@ -1615,6 +1618,59 @@ void xt_proto_fini(struct net *net, u_int8_t af)
} }
EXPORT_SYMBOL_GPL(xt_proto_fini); EXPORT_SYMBOL_GPL(xt_proto_fini);
/**
* xt_percpu_counter_alloc - allocate x_tables rule counter
*
* @state: pointer to xt_percpu allocation state
* @counter: pointer to counter struct inside the ip(6)/arpt_entry struct
*
* On SMP, the packet counter [ ip(6)t_entry->counters.pcnt ] will then
* contain the address of the real (percpu) counter.
*
* Rule evaluation needs to use xt_get_this_cpu_counter() helper
* to fetch the real percpu counter.
*
 * To speed up allocation and improve data locality, a 4 KiB block is
* allocated.
*
* xt_percpu_counter_alloc_state contains the base address of the
* allocated page and the current sub-offset.
*
 * Returns false on error.
*/
bool xt_percpu_counter_alloc(struct xt_percpu_counter_alloc_state *state,
struct xt_counters *counter)
{
BUILD_BUG_ON(XT_PCPU_BLOCK_SIZE < (sizeof(*counter) * 2));
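	/* On uniprocessor systems the embedded counter is used directly. */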
if (nr_cpu_ids <= 1)
return true;
if (!state->mem) {
state->mem = __alloc_percpu(XT_PCPU_BLOCK_SIZE,
XT_PCPU_BLOCK_SIZE);
if (!state->mem)
return false;
}
counter->pcnt = (__force unsigned long)(state->mem + state->off);
state->off += sizeof(*counter);
if (state->off > (XT_PCPU_BLOCK_SIZE - sizeof(*counter))) {
state->mem = NULL;
state->off = 0;
}
return true;
}
EXPORT_SYMBOL_GPL(xt_percpu_counter_alloc);
void xt_percpu_counter_free(struct xt_counters *counters)
{
unsigned long pcnt = counters->pcnt;
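	/* Only the counter at the start of a 4k block owns the percpu
	 * allocation; the others share it and must not free it.
	 */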
if (nr_cpu_ids > 1 && (pcnt & (XT_PCPU_BLOCK_SIZE - 1)) == 0)
free_percpu((void __percpu *)pcnt);
}
EXPORT_SYMBOL_GPL(xt_percpu_counter_free);
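As a usage sketch (not part of this hunk): a table translation loop keeps one allocation state on the stack and feeds every rule's counters through it, so consecutive rules share the same 4 KiB percpu block. The xt_entry_foreach() iterator is the existing x_tables helper; the entry0/newinfo/iter names are borrowed from the ip_tables translator for illustration, and error unwinding is omitted.

struct xt_percpu_counter_alloc_state alloc_state = { 0 };
struct ipt_entry *iter;

/* Translation: pack each rule's counter into the shared percpu block. */
xt_entry_foreach(iter, entry0, newinfo->size) {
	if (!xt_percpu_counter_alloc(&alloc_state, &iter->counters))
		return -ENOMEM;
}

/* Teardown: only counters sitting at a block boundary free percpu memory. */
xt_entry_foreach(iter, entry0, newinfo->size)
	xt_percpu_counter_free(&iter->counters);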
static int __net_init xt_net_init(struct net *net) static int __net_init xt_net_init(struct net *net)
{ {
int i; int i;
......
...@@ -106,7 +106,7 @@ static int connsecmark_tg_check(const struct xt_tgchk_param *par) ...@@ -106,7 +106,7 @@ static int connsecmark_tg_check(const struct xt_tgchk_param *par)
return -EINVAL; return -EINVAL;
} }
ret = nf_ct_l3proto_try_module_get(par->family); ret = nf_ct_netns_get(par->net, par->family);
if (ret < 0) if (ret < 0)
pr_info("cannot load conntrack support for proto=%u\n", pr_info("cannot load conntrack support for proto=%u\n",
par->family); par->family);
...@@ -115,7 +115,7 @@ static int connsecmark_tg_check(const struct xt_tgchk_param *par) ...@@ -115,7 +115,7 @@ static int connsecmark_tg_check(const struct xt_tgchk_param *par)
static void connsecmark_tg_destroy(const struct xt_tgdtor_param *par) static void connsecmark_tg_destroy(const struct xt_tgdtor_param *par)
{ {
nf_ct_l3proto_module_put(par->family); nf_ct_netns_put(par->net, par->family);
} }
static struct xt_target connsecmark_tg_reg __read_mostly = { static struct xt_target connsecmark_tg_reg __read_mostly = {
......
...@@ -216,7 +216,7 @@ static int xt_ct_tg_check(const struct xt_tgchk_param *par, ...@@ -216,7 +216,7 @@ static int xt_ct_tg_check(const struct xt_tgchk_param *par,
goto err1; goto err1;
#endif #endif
ret = nf_ct_l3proto_try_module_get(par->family); ret = nf_ct_netns_get(par->net, par->family);
if (ret < 0) if (ret < 0)
goto err1; goto err1;
...@@ -260,7 +260,7 @@ static int xt_ct_tg_check(const struct xt_tgchk_param *par, ...@@ -260,7 +260,7 @@ static int xt_ct_tg_check(const struct xt_tgchk_param *par,
err3: err3:
nf_ct_tmpl_free(ct); nf_ct_tmpl_free(ct);
err2: err2:
nf_ct_l3proto_module_put(par->family); nf_ct_netns_put(par->net, par->family);
err1: err1:
return ret; return ret;
} }
...@@ -341,7 +341,7 @@ static void xt_ct_tg_destroy(const struct xt_tgdtor_param *par, ...@@ -341,7 +341,7 @@ static void xt_ct_tg_destroy(const struct xt_tgdtor_param *par,
if (help) if (help)
module_put(help->helper->me); module_put(help->helper->me);
nf_ct_l3proto_module_put(par->family); nf_ct_netns_put(par->net, par->family);
xt_ct_destroy_timeout(ct); xt_ct_destroy_timeout(ct);
nf_ct_put(info->ct); nf_ct_put(info->ct);
......
...@@ -60,7 +60,12 @@ static int netmap_tg6_checkentry(const struct xt_tgchk_param *par) ...@@ -60,7 +60,12 @@ static int netmap_tg6_checkentry(const struct xt_tgchk_param *par)
if (!(range->flags & NF_NAT_RANGE_MAP_IPS)) if (!(range->flags & NF_NAT_RANGE_MAP_IPS))
return -EINVAL; return -EINVAL;
return 0; return nf_ct_netns_get(par->net, par->family);
}
static void netmap_tg_destroy(const struct xt_tgdtor_param *par)
{
nf_ct_netns_put(par->net, par->family);
} }
static unsigned int static unsigned int
...@@ -111,7 +116,7 @@ static int netmap_tg4_check(const struct xt_tgchk_param *par) ...@@ -111,7 +116,7 @@ static int netmap_tg4_check(const struct xt_tgchk_param *par)
pr_debug("bad rangesize %u.\n", mr->rangesize); pr_debug("bad rangesize %u.\n", mr->rangesize);
return -EINVAL; return -EINVAL;
} }
return 0; return nf_ct_netns_get(par->net, par->family);
} }
static struct xt_target netmap_tg_reg[] __read_mostly = { static struct xt_target netmap_tg_reg[] __read_mostly = {
...@@ -127,6 +132,7 @@ static struct xt_target netmap_tg_reg[] __read_mostly = { ...@@ -127,6 +132,7 @@ static struct xt_target netmap_tg_reg[] __read_mostly = {
(1 << NF_INET_LOCAL_OUT) | (1 << NF_INET_LOCAL_OUT) |
(1 << NF_INET_LOCAL_IN), (1 << NF_INET_LOCAL_IN),
.checkentry = netmap_tg6_checkentry, .checkentry = netmap_tg6_checkentry,
.destroy = netmap_tg_destroy,
.me = THIS_MODULE, .me = THIS_MODULE,
}, },
{ {
...@@ -141,6 +147,7 @@ static struct xt_target netmap_tg_reg[] __read_mostly = { ...@@ -141,6 +147,7 @@ static struct xt_target netmap_tg_reg[] __read_mostly = {
(1 << NF_INET_LOCAL_OUT) | (1 << NF_INET_LOCAL_OUT) |
(1 << NF_INET_LOCAL_IN), (1 << NF_INET_LOCAL_IN),
.checkentry = netmap_tg4_check, .checkentry = netmap_tg4_check,
.destroy = netmap_tg_destroy,
.me = THIS_MODULE, .me = THIS_MODULE,
}, },
}; };
......
@@ -40,7 +40,13 @@ static int redirect_tg6_checkentry(const struct xt_tgchk_param *par)
 	if (range->flags & NF_NAT_RANGE_MAP_IPS)
 		return -EINVAL;
-	return 0;
+	return nf_ct_netns_get(par->net, par->family);
+}
+
+static void redirect_tg_destroy(const struct xt_tgdtor_param *par)
+{
+	nf_ct_netns_put(par->net, par->family);
 }
 
 /* FIXME: Take multiple ranges --RR */
@@ -56,7 +62,7 @@ static int redirect_tg4_check(const struct xt_tgchk_param *par)
 		pr_debug("bad rangesize %u.\n", mr->rangesize);
 		return -EINVAL;
 	}
-	return 0;
+	return nf_ct_netns_get(par->net, par->family);
 }
 
 static unsigned int
@@ -72,6 +78,7 @@ static struct xt_target redirect_tg_reg[] __read_mostly = {
 		.revision   = 0,
 		.table      = "nat",
 		.checkentry = redirect_tg6_checkentry,
+		.destroy    = redirect_tg_destroy,
 		.target     = redirect_tg6,
 		.targetsize = sizeof(struct nf_nat_range),
 		.hooks      = (1 << NF_INET_PRE_ROUTING) |
@@ -85,6 +92,7 @@ static struct xt_target redirect_tg_reg[] __read_mostly = {
 		.table      = "nat",
 		.target     = redirect_tg4,
 		.checkentry = redirect_tg4_check,
+		.destroy    = redirect_tg_destroy,
 		.targetsize = sizeof(struct nf_nat_ipv4_multi_range_compat),
 		.hooks      = (1 << NF_INET_PRE_ROUTING) |
 			      (1 << NF_INET_LOCAL_OUT),
...
@@ -531,6 +531,11 @@ tproxy_tg6_v1(struct sk_buff *skb, const struct xt_action_param *par)
 static int tproxy_tg6_check(const struct xt_tgchk_param *par)
 {
 	const struct ip6t_ip6 *i = par->entryinfo;
+	int err;
+
+	err = nf_defrag_ipv6_enable(par->net);
+	if (err)
+		return err;
 
 	if ((i->proto == IPPROTO_TCP || i->proto == IPPROTO_UDP) &&
 	    !(i->invflags & IP6T_INV_PROTO))
@@ -545,6 +550,11 @@ static int tproxy_tg6_check(const struct xt_tgchk_param *par)
 static int tproxy_tg4_check(const struct xt_tgchk_param *par)
 {
 	const struct ipt_ip *i = par->entryinfo;
+	int err;
+
+	err = nf_defrag_ipv4_enable(par->net);
+	if (err)
+		return err;
 
 	if ((i->proto == IPPROTO_TCP || i->proto == IPPROTO_UDP)
 	    && !(i->invflags & IPT_INV_PROTO))
@@ -596,11 +606,6 @@ static struct xt_target tproxy_tg_reg[] __read_mostly = {
 static int __init tproxy_tg_init(void)
 {
-	nf_defrag_ipv4_enable();
-#ifdef XT_TPROXY_HAVE_IPV6
-	nf_defrag_ipv6_enable();
-#endif
-
 	return xt_register_targets(tproxy_tg_reg, ARRAY_SIZE(tproxy_tg_reg));
 }
...
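The TPROXY hunks apply the same on-demand idea to defragmentation: nf_defrag_ipv4_enable() and nf_defrag_ipv6_enable() now take the rule's network namespace and may fail, so the unconditional calls at module init time are removed and each checkentry requests defrag only for the namespace loading the rule. A minimal sketch of the new calling convention, using a hypothetical IPv4 target "foo":

#include <linux/netfilter/x_tables.h>
#include <net/netfilter/ipv4/nf_defrag_ipv4.h>

/* Illustrative sketch only: request per-netns defrag at rule insertion
 * time and let a failure abort loading the rule.
 */
static int foo_tg4_check(const struct xt_tgchk_param *par)
{
	int err;

	err = nf_defrag_ipv4_enable(par->net);	/* formerly a void call at module init */
	if (err)
		return err;

	return 0;
}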
@@ -10,6 +10,7 @@
 #include <linux/module.h>
 #include <linux/skbuff.h>
 #include <linux/filter.h>
+#include <linux/bpf.h>
 #include <linux/netfilter/xt_bpf.h>
 #include <linux/netfilter/x_tables.h>
@@ -20,15 +21,15 @@ MODULE_LICENSE("GPL");
 MODULE_ALIAS("ipt_bpf");
 MODULE_ALIAS("ip6t_bpf");
 
-static int bpf_mt_check(const struct xt_mtchk_param *par)
+static int __bpf_mt_check_bytecode(struct sock_filter *insns, __u16 len,
+				   struct bpf_prog **ret)
 {
-	struct xt_bpf_info *info = par->matchinfo;
 	struct sock_fprog_kern program;
 
-	program.len = info->bpf_program_num_elem;
-	program.filter = info->bpf_program;
+	program.len = len;
+	program.filter = insns;
 
-	if (bpf_prog_create(&info->filter, &program)) {
+	if (bpf_prog_create(ret, &program)) {
 		pr_info("bpf: check failed: parse error\n");
 		return -EINVAL;
 	}
@@ -36,6 +37,42 @@ static int bpf_mt_check(const struct xt_mtchk_param *par)
 	return 0;
 }
 
+static int __bpf_mt_check_fd(int fd, struct bpf_prog **ret)
+{
+	struct bpf_prog *prog;
+
+	prog = bpf_prog_get_type(fd, BPF_PROG_TYPE_SOCKET_FILTER);
+	if (IS_ERR(prog))
+		return PTR_ERR(prog);
+
+	*ret = prog;
+	return 0;
+}
+
+static int bpf_mt_check(const struct xt_mtchk_param *par)
+{
+	struct xt_bpf_info *info = par->matchinfo;
+
+	return __bpf_mt_check_bytecode(info->bpf_program,
+				       info->bpf_program_num_elem,
+				       &info->filter);
+}
+
+static int bpf_mt_check_v1(const struct xt_mtchk_param *par)
+{
+	struct xt_bpf_info_v1 *info = par->matchinfo;
+
+	if (info->mode == XT_BPF_MODE_BYTECODE)
+		return __bpf_mt_check_bytecode(info->bpf_program,
+					       info->bpf_program_num_elem,
+					       &info->filter);
+	else if (info->mode == XT_BPF_MODE_FD_PINNED ||
+		 info->mode == XT_BPF_MODE_FD_ELF)
+		return __bpf_mt_check_fd(info->fd, &info->filter);
+	else
+		return -EINVAL;
+}
+
 static bool bpf_mt(const struct sk_buff *skb, struct xt_action_param *par)
 {
 	const struct xt_bpf_info *info = par->matchinfo;
@@ -43,31 +80,58 @@ static bool bpf_mt(const struct sk_buff *skb, struct xt_action_param *par)
 	return BPF_PROG_RUN(info->filter, skb);
 }
 
+static bool bpf_mt_v1(const struct sk_buff *skb, struct xt_action_param *par)
+{
+	const struct xt_bpf_info_v1 *info = par->matchinfo;
+
+	return !!bpf_prog_run_save_cb(info->filter, (struct sk_buff *) skb);
+}
+
 static void bpf_mt_destroy(const struct xt_mtdtor_param *par)
 {
 	const struct xt_bpf_info *info = par->matchinfo;
+
+	bpf_prog_destroy(info->filter);
+}
+
+static void bpf_mt_destroy_v1(const struct xt_mtdtor_param *par)
+{
+	const struct xt_bpf_info_v1 *info = par->matchinfo;
+
 	bpf_prog_destroy(info->filter);
 }
 
-static struct xt_match bpf_mt_reg __read_mostly = {
-	.name		= "bpf",
-	.revision	= 0,
-	.family		= NFPROTO_UNSPEC,
-	.checkentry	= bpf_mt_check,
-	.match		= bpf_mt,
-	.destroy	= bpf_mt_destroy,
-	.matchsize	= sizeof(struct xt_bpf_info),
-	.me		= THIS_MODULE,
+static struct xt_match bpf_mt_reg[] __read_mostly = {
+	{
+		.name		= "bpf",
+		.revision	= 0,
+		.family		= NFPROTO_UNSPEC,
+		.checkentry	= bpf_mt_check,
+		.match		= bpf_mt,
+		.destroy	= bpf_mt_destroy,
+		.matchsize	= sizeof(struct xt_bpf_info),
+		.me		= THIS_MODULE,
	},
+	{
+		.name		= "bpf",
+		.revision	= 1,
+		.family		= NFPROTO_UNSPEC,
+		.checkentry	= bpf_mt_check_v1,
+		.match		= bpf_mt_v1,
+		.destroy	= bpf_mt_destroy_v1,
+		.matchsize	= sizeof(struct xt_bpf_info_v1),
+		.me		= THIS_MODULE,
+	},
 };
 
 static int __init bpf_mt_init(void)
 {
-	return xt_register_match(&bpf_mt_reg);
+	return xt_register_matches(bpf_mt_reg, ARRAY_SIZE(bpf_mt_reg));
 }
 
 static void __exit bpf_mt_exit(void)
 {
-	xt_unregister_match(&bpf_mt_reg);
+	xt_unregister_matches(bpf_mt_reg, ARRAY_SIZE(bpf_mt_reg));
 }
 
 module_init(bpf_mt_init);
...
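Revision 1 of the bpf match keeps classic bytecode (XT_BPF_MODE_BYTECODE) and adds two fd-based modes; with XT_BPF_MODE_FD_PINNED userspace passes the fd of an eBPF program pinned in bpffs, which the kernel resolves via bpf_prog_get_type(fd, BPF_PROG_TYPE_SOCKET_FILTER) as shown above. A userspace sketch of obtaining such an fd with the BPF_OBJ_GET command of bpf(2); the pin path and helper name here are made up for illustration:

#include <linux/bpf.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Illustrative only: fetch the fd of a program pinned in bpffs so it can
 * be handed to the revision-1 xt_bpf match in XT_BPF_MODE_FD_PINNED mode.
 */
static int get_pinned_prog_fd(const char *path)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.pathname = (__u64)(unsigned long)path;

	return syscall(__NR_bpf, BPF_OBJ_GET, &attr, sizeof(attr));
}

int main(void)
{
	int fd = get_pinned_prog_fd("/sys/fs/bpf/my_filter");	/* hypothetical pin path */

	if (fd < 0) {
		perror("BPF_OBJ_GET");
		return 1;
	}
	printf("pinned program fd: %d\n", fd);
	/* the iptables bpf extension would copy this fd into xt_bpf_info_v1.fd */
	return 0;
}

Note that the revision-1 match invokes the program through bpf_prog_run_save_cb() rather than BPF_PROG_RUN(), which saves and restores the skb control block around the run, since eBPF socket filters are allowed to write to it.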
@@ -110,7 +110,7 @@ static int connbytes_mt_check(const struct xt_mtchk_param *par)
 	    sinfo->direction != XT_CONNBYTES_DIR_BOTH)
 		return -EINVAL;
 
-	ret = nf_ct_l3proto_try_module_get(par->family);
+	ret = nf_ct_netns_get(par->net, par->family);
 	if (ret < 0)
 		pr_info("cannot load conntrack support for proto=%u\n",
 			par->family);
@@ -129,7 +129,7 @@ static int connbytes_mt_check(const struct xt_mtchk_param *par)
 static void connbytes_mt_destroy(const struct xt_mtdtor_param *par)
 {
-	nf_ct_l3proto_module_put(par->family);
+	nf_ct_netns_put(par->net, par->family);
 }
 
 static struct xt_match connbytes_mt_reg __read_mostly = {
...