Commit c139cd3b authored by David S. Miller

Merge branch 'ip_forward_pmtu'

Hannes Frederic Sowa says:

====================
path mtu hardening patches

After a lot of back and forth I want to propose these changes regarding
path mtu hardening and give an outline of why I think this is the best
way to proceed:

This set contains the following patches:
* ipv4: introduce ip_dst_mtu_maybe_forward and protect forwarding path against pmtu spoofing
* ipv6: introduce ip6_dst_mtu_forward and protect forwarding path with it
* ipv4: introduce hardened ip_no_pmtu_disc mode

The first one switches the IPv4 forwarding path to use the interface
MTU by default and ignore any discovered path MTU. It provides a sysctl
to switch back to the original behavior (see the discussion below).

The second patch does the same thing unconditionally for IPv6. I don't
provide a knob for IPv6 to switch back to the original behavior (please
see below).

The third patch introduces a hardened pmtu mode, where path MTU
information is only accepted if the protocol is able to do more
stringent checks on the ICMP piggyback payload (please see the patch
commit message for further details).
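
To illustrate, the new handling in icmp_unreach() (see the icmp.c hunk
below) boils down to the following decision. This is only a condensed
sketch: the wrapper name accept_frag_needed() is made up here, while
icmp_tag_validation() is the helper added by the patch:

    /* Condensed, illustrative restatement of the ICMP_FRAG_NEEDED
     * mode handling below; accept_frag_needed() is a made-up name.
     */
    static bool accept_frag_needed(struct net *net, u8 protocol)
    {
            switch (net->ipv4.sysctl_ip_no_pmtu_disc) {
            case 0:         /* classic behavior: accept pmtu updates */
                    return true;
            case 3:         /* hardened: only protocols with strict icmp
                             * tag validation (TCP, SCTP, DCCP) qualify */
                    return icmp_tag_validation(protocol);
            default:        /* modes 1 and 2: discard the notification */
                    return false;
            }
    }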

Why is this change necessary?

First of all, RFC 1191 4. Router specification says:
"When a router is unable to forward a datagram because it exceeds the
 MTU of the next-hop network and its Don't Fragment bit is set, the
 router is required to return an ICMP Destination Unreachable message
 to the source of the datagram, with the Code indicating
 "fragmentation needed and DF set". ..."

For some time now fragmentation has been considered problematic, e.g.:
* http://www.hpl.hp.com/techreports/Compaq-DEC/WRL-87-3.pdf
* http://tools.ietf.org/search/rfc4963

Most of these sources agree that fragmentation should be avoided because
of efficiency, data corruption or security concerns.

Recently it was shown that correctly guessing IP IDs can lead
to data injection into DNS packets:
<https://sites.google.com/site/hayashulman/files/fragmentation-poisoning.pdf>

While we can try to completely stop fragmentation on the end host
(this is e.g. implemented via IP_PMTUDISC_INTERFACE), we cannot stop
fragmentation completely on the forwarding path. On the end host the
application has to deal with MTUs and has to choose fallback methods
if fragmentation could be an attack vector. This is already the case for
most DNS software, where a maximum UDP packet size can be configured. But
until recently they had no control over local fragmentation and could
thus emit fragmented packets.
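
For example, a name server could open its UDP socket roughly as
follows. This is a minimal sketch, assuming a 3.13+ kernel; with older
userspace headers IP_PMTUDISC_INTERFACE may need to be pulled in from
<linux/in.h>:

    /* Minimal sketch: a UDP socket that neither accepts path MTU
     * updates nor emits locally fragmented packets; oversized sends
     * fail with EMSGSIZE instead.
     */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int open_pmtu_hardened_socket(void)
    {
            int val = IP_PMTUDISC_INTERFACE;
            int fd = socket(AF_INET, SOCK_DGRAM, 0);

            if (fd < 0)
                    return -1;
            if (setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER,
                           &val, sizeof(val)) < 0)
                    perror("setsockopt(IP_MTU_DISCOVER)");
            return fd;
    }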

On the forwarding path we can just try to delay fragmentation to the
last hop where it is really necessary. The current kernel already does
that, but only because routers don't receive path MTU feedback; such
notifications are only sent back to the end host system. However, it is
possible to maliciously insert path MTU information via ICMP packets
which have e.g. an icmp echo_reply payload, because we cannot validate
those notifications against local sockets. DHCP clients which establish
an any-bound RAW socket could also start processing unwanted
fragmentation-needed packets.

Why does IPv4 have a knob to revert to the old behavior while IPv6 doesn't?

IPv4 does fragmentation on the path while IPv6 always responds with
packet-too-big errors. The interface MTU will always be at least as
large as the path MTU information, so with the old behavior we could
discard packets we could actually forward because of maliciously
injected information. After this change we let the hop which really
cannot forward the packet notify the host of the problem.

IPv4 allows fragmentation mid-path. Someone may use software which
tries to discover such paths and assumes that the kernel handles the
discovered path MTU information automatically while forwarding. This
should be an extremely rare case, but because I could not exclude the
possibility, the knob is provided. Such software could also insert
non-locked MTU information into the kernel, which we currently cannot
distinguish from genuine path MTU information. Premature fragmentation
can also work around problems in wrongly configured networks, thus this
switch is provided.
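
Flipping the knob is equivalent to "sysctl -w
net.ipv4.ip_forward_use_pmtu=1". As a sketch, a program could do the
same via procfs (the path follows from the sysctl table entry added
below):

    /* Sketch: re-enable the old pmtu-honoring forwarding behavior by
     * writing the per-namespace sysctl introduced in this series.
     */
    #include <stdio.h>

    int enable_ip_forward_use_pmtu(void)
    {
            FILE *f = fopen("/proc/sys/net/ipv4/ip_forward_use_pmtu", "w");

            if (!f)
                    return -1;
            fputs("1\n", f);        /* 1 = honor discovered path MTUs */
            return fclose(f);
    }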

One frag-needed packet could reduce the path MTU down to 552 bytes
(route/min_pmtu).
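
A connected socket can observe the cached value. As an illustration
(IP_MTU is a long-standing getsockopt and not part of this series):

    /* Illustration: query the path MTU the kernel currently caches
     * for a connected socket; a single spoofed frag-needed error can
     * pull this down to route/min_pmtu (552 by default).
     */
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    static void print_cached_path_mtu(int connected_fd)
    {
            int mtu;
            socklen_t len = sizeof(mtu);

            if (getsockopt(connected_fd, IPPROTO_IP, IP_MTU,
                           &mtu, &len) == 0)
                    printf("path mtu: %d\n", mtu);
            else
                    perror("getsockopt(IP_MTU)");
    }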

Misc:

IPv6 neighbor discovery can advertise MTU information for an
interface. This information updates the IPv6-specific interface MTU and
is thus used by the forwarding path.

Tunnel and xfrm output paths will still honour the path MTU and also
respond with packet-too-big or fragmentation-needed errors if needed.

Changelog for all patches:

Patch 1 (ipv4: ip_dst_mtu_maybe_forward):
v2)
* enabled ip_forward_use_pmtu by default
* reworded
v3)
* disabled ip_forward_use_pmtu by default
* reworded
v4)
* renamed ip_dst_mtu_secure to ip_dst_mtu_maybe_forward
* updated changelog accordingly
* removed unneeded !!(... & ...) double negations

Patch 2 (ipv6: ip6_dst_mtu_forward):
v2)
* by default we honour pmtu information
v3)
* only honor interface mtu
* rewritten and simplified
* no knob to fall back to old mode any more

Patch 3 (ipv4: hardened ip_no_pmtu_disc):
v2)
* reworded Documentation
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
parents 6c76a07a 8ed1dc44
@@ -26,12 +26,36 @@ ip_no_pmtu_disc - INTEGER
 	discarded. Outgoing frames are handled the same as in mode 1,
 	implicitly setting IP_PMTUDISC_DONT on every created socket.
-	Possible values: 0-2
+
+	Mode 3 is a hardened pmtu discovery mode. The kernel will only
+	accept fragmentation-needed errors if the underlying protocol
+	can verify them besides a plain socket lookup. Current
+	protocols for which pmtu events will be honored are TCP, SCTP
+	and DCCP as they verify e.g. the sequence number or the
+	association. This mode should not be enabled globally but is
+	only intended to secure e.g. name servers in namespaces where
+	TCP path mtu must still work but path MTU information of other
+	protocols should be discarded. If enabled globally this mode
+	could break other protocols.
+
+	Possible values: 0-3
 	Default: FALSE
 
 min_pmtu - INTEGER
 	default 552 - minimum discovered Path MTU
 
+ip_forward_use_pmtu - BOOLEAN
+	By default we don't trust protocol path MTUs while forwarding
+	because they could be easily forged and can lead to unwanted
+	fragmentation by the router.
+	You only need to enable this if you have user-space software
+	which tries to discover path mtus by itself and depends on the
+	kernel honoring this information. This is normally not the
+	case.
+	Default: 0 (disabled)
+	Possible values:
+	0 - disabled
+	1 - enabled
+
 route/max_size - INTEGER
 	Maximum number of routes allowed in the kernel. Increase
 	this when using large numbers of interfaces and/or routes.
...
@@ -263,6 +263,39 @@ int ip_dont_fragment(struct sock *sk, struct dst_entry *dst)
 		 !(dst_metric_locked(dst, RTAX_MTU)));
 }
 
+static inline bool ip_sk_accept_pmtu(const struct sock *sk)
+{
+	return inet_sk(sk)->pmtudisc != IP_PMTUDISC_INTERFACE;
+}
+
+static inline bool ip_sk_use_pmtu(const struct sock *sk)
+{
+	return inet_sk(sk)->pmtudisc < IP_PMTUDISC_PROBE;
+}
+
+static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
+						    bool forwarding)
+{
+	struct net *net = dev_net(dst->dev);
+
+	if (net->ipv4.sysctl_ip_fwd_use_pmtu ||
+	    dst_metric_locked(dst, RTAX_MTU) ||
+	    !forwarding)
+		return dst_mtu(dst);
+
+	return min(dst->dev->mtu, IP_MAX_MTU);
+}
+
+static inline unsigned int ip_skb_dst_mtu(const struct sk_buff *skb)
+{
+	if (!skb->sk || ip_sk_use_pmtu(skb->sk)) {
+		bool forwarding = IPCB(skb)->flags & IPSKB_FORWARDED;
+		return ip_dst_mtu_maybe_forward(skb_dst(skb), forwarding);
+	} else {
+		return min(skb_dst(skb)->dev->mtu, IP_MAX_MTU);
+	}
+}
+
 void __ip_select_ident(struct iphdr *iph, struct dst_entry *dst, int more);
 
 static inline void ip_select_ident(struct sk_buff *skb, struct dst_entry *dst, struct sock *sk)
...
@@ -70,6 +70,7 @@ struct netns_ipv4 {
 	int sysctl_tcp_ecn;
 
 	int sysctl_ip_no_pmtu_disc;
+	int sysctl_ip_fwd_use_pmtu;
 
 	kgid_t sysctl_ping_group_range[2];
...
@@ -43,7 +43,12 @@ struct net_protocol {
 	int		(*handler)(struct sk_buff *skb);
 	void		(*err_handler)(struct sk_buff *skb, u32 info);
 	unsigned int	no_policy:1,
-			netns_ok:1;
+			netns_ok:1,
+			/* does the protocol do more stringent
+			 * icmp tag validation than simple
+			 * socket lookup?
+			 */
+			icmp_strict_tag_validation:1;
 };
 
 #if IS_ENABLED(CONFIG_IPV6)
...
@@ -36,6 +36,9 @@
 #include <linux/cache.h>
 #include <linux/security.h>
 
+/* IPv4 datagram length is stored into 16bit field (tot_len) */
+#define IP_MAX_MTU	0xFFFFU
+
 #define RTO_ONLINK	0x01
 
 #define RT_CONN_FLAGS(sk)   (RT_TOS(inet_sk(sk)->tos) | sock_flag(sk, SOCK_LOCALROUTE))
@@ -311,20 +314,4 @@ static inline int ip4_dst_hoplimit(const struct dst_entry *dst)
 	return hoplimit;
 }
 
-static inline bool ip_sk_accept_pmtu(const struct sock *sk)
-{
-	return inet_sk(sk)->pmtudisc != IP_PMTUDISC_INTERFACE;
-}
-
-static inline bool ip_sk_use_pmtu(const struct sock *sk)
-{
-	return inet_sk(sk)->pmtudisc < IP_PMTUDISC_PROBE;
-}
-
-static inline int ip_skb_dst_mtu(const struct sk_buff *skb)
-{
-	return (!skb->sk || ip_sk_use_pmtu(skb->sk)) ?
-	       dst_mtu(skb_dst(skb)) : skb_dst(skb)->dev->mtu;
-}
-
 #endif	/* _ROUTE_H */
...
@@ -989,6 +989,7 @@ static const struct net_protocol dccp_v4_protocol = {
 	.err_handler	= dccp_v4_err,
 	.no_policy	= 1,
 	.netns_ok	= 1,
+	.icmp_strict_tag_validation = 1,
 };
 
 static const struct proto_ops inet_dccp_ops = {
...
@@ -1545,6 +1545,7 @@ static const struct net_protocol tcp_protocol = {
 	.err_handler	= tcp_v4_err,
 	.no_policy	= 1,
 	.netns_ok	= 1,
+	.icmp_strict_tag_validation = 1,
 };
 
 static const struct net_protocol udp_protocol = {
...
@@ -668,6 +668,16 @@ static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
 	rcu_read_unlock();
 }
 
+static bool icmp_tag_validation(int proto)
+{
+	bool ok;
+
+	rcu_read_lock();
+	ok = rcu_dereference(inet_protos[proto])->icmp_strict_tag_validation;
+	rcu_read_unlock();
+	return ok;
+}
+
 /*
  *	Handle ICMP_DEST_UNREACH, ICMP_TIME_EXCEED, ICMP_QUENCH, and
  *	ICMP_PARAMETERPROB.
@@ -705,12 +715,22 @@ static void icmp_unreach(struct sk_buff *skb)
 		case ICMP_PORT_UNREACH:
 			break;
 		case ICMP_FRAG_NEEDED:
-			if (net->ipv4.sysctl_ip_no_pmtu_disc == 2) {
-				goto out;
-			} else if (net->ipv4.sysctl_ip_no_pmtu_disc) {
+			/* for documentation of the ip_no_pmtu_disc
+			 * values please see
+			 * Documentation/networking/ip-sysctl.txt
+			 */
+			switch (net->ipv4.sysctl_ip_no_pmtu_disc) {
+			default:
 				LIMIT_NETDEBUG(KERN_INFO pr_fmt("%pI4: fragmentation needed and DF set\n"),
 					       &iph->daddr);
-			} else {
+				break;
+			case 2:
+				goto out;
+			case 3:
+				if (!icmp_tag_validation(iph->protocol))
+					goto out;
+				/* fall through */
+			case 0:
 				info = ntohs(icmph->un.frag.mtu);
 				if (!info)
 					goto out;
...
@@ -54,6 +54,7 @@ static int ip_forward_finish(struct sk_buff *skb)
 
 int ip_forward(struct sk_buff *skb)
 {
+	u32 mtu;
 	struct iphdr *iph;	/* Our header */
 	struct rtable *rt;	/* Route we use */
 	struct ip_options *opt	= &(IPCB(skb)->opt);
@@ -88,11 +89,13 @@ int ip_forward(struct sk_buff *skb)
 	if (opt->is_strictroute && rt->rt_uses_gateway)
 		goto sr_failed;
 
-	if (unlikely(skb->len > dst_mtu(&rt->dst) && !skb_is_gso(skb) &&
+	IPCB(skb)->flags |= IPSKB_FORWARDED;
+	mtu = ip_dst_mtu_maybe_forward(&rt->dst, true);
+	if (unlikely(skb->len > mtu && !skb_is_gso(skb) &&
 		     (ip_hdr(skb)->frag_off & htons(IP_DF))) && !skb->local_df) {
 		IP_INC_STATS(dev_net(rt->dst.dev), IPSTATS_MIB_FRAGFAILS);
 		icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
-			  htonl(dst_mtu(&rt->dst)));
+			  htonl(mtu));
 		goto drop;
 	}
...
@@ -449,6 +449,7 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *))
 	__be16 not_last_frag;
 	struct rtable *rt = skb_rtable(skb);
 	int err = 0;
+	bool forwarding = IPCB(skb)->flags & IPSKB_FORWARDED;
 
 	dev = rt->dst.dev;
 
@@ -458,12 +459,13 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *))
 
 	iph = ip_hdr(skb);
 
+	mtu = ip_dst_mtu_maybe_forward(&rt->dst, forwarding);
 	if (unlikely(((iph->frag_off & htons(IP_DF)) && !skb->local_df) ||
 		     (IPCB(skb)->frag_max_size &&
-		      IPCB(skb)->frag_max_size > dst_mtu(&rt->dst)))) {
+		      IPCB(skb)->frag_max_size > mtu))) {
 		IP_INC_STATS(dev_net(dev), IPSTATS_MIB_FRAGFAILS);
 		icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
-			  htonl(ip_skb_dst_mtu(skb)));
+			  htonl(mtu));
 		kfree_skb(skb);
 		return -EMSGSIZE;
 	}
@@ -473,7 +475,7 @@ int ip_fragment(struct sk_buff *skb, int (*output)(struct sk_buff *))
 	 */
 
 	hlen = iph->ihl * 4;
-	mtu = dst_mtu(&rt->dst) - hlen;	/* Size of data space */
+	mtu = mtu - hlen;	/* Size of data space */
 #ifdef CONFIG_BRIDGE_NETFILTER
 	if (skb->nf_bridge)
 		mtu -= nf_bridge_mtu_reduction(skb);
...
@@ -112,9 +112,6 @@
 #define RT_FL_TOS(oldflp4) \
 	((oldflp4)->flowi4_tos & (IPTOS_RT_MASK | RTO_ONLINK))
 
-/* IPv4 datagram length is stored into 16bit field (tot_len) */
-#define IP_MAX_MTU	0xFFFF
-
 #define RT_GC_TIMEOUT (300*HZ)
 
 static int ip_rt_max_size;
...
@@ -831,6 +831,13 @@ static struct ctl_table ipv4_net_table[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec
 	},
+	{
+		.procname	= "ip_forward_use_pmtu",
+		.data		= &init_net.ipv4.sysctl_ip_fwd_use_pmtu,
+		.maxlen		= sizeof(int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
 	{ }
 };
...
@@ -321,6 +321,27 @@ static inline int ip6_forward_finish(struct sk_buff *skb)
 	return dst_output(skb);
 }
 
+static unsigned int ip6_dst_mtu_forward(const struct dst_entry *dst)
+{
+	unsigned int mtu;
+	struct inet6_dev *idev;
+
+	if (dst_metric_locked(dst, RTAX_MTU)) {
+		mtu = dst_metric_raw(dst, RTAX_MTU);
+		if (mtu)
+			return mtu;
+	}
+
+	mtu = IPV6_MIN_MTU;
+	rcu_read_lock();
+	idev = __in6_dev_get(dst->dev);
+	if (idev)
+		mtu = idev->cnf.mtu6;
+	rcu_read_unlock();
+
+	return mtu;
+}
+
 int ip6_forward(struct sk_buff *skb)
 {
 	struct dst_entry *dst = skb_dst(skb);
@@ -441,7 +462,7 @@ int ip6_forward(struct sk_buff *skb)
 		}
 	}
 
-	mtu = dst_mtu(dst);
+	mtu = ip6_dst_mtu_forward(dst);
 	if (mtu < IPV6_MIN_MTU)
 		mtu = IPV6_MIN_MTU;
...
@@ -1030,6 +1030,7 @@ static const struct net_protocol sctp_protocol = {
 	.err_handler	= sctp_v4_err,
 	.no_policy	= 1,
 	.netns_ok	= 1,
+	.icmp_strict_tag_validation = 1,
 };
 
 /* IPv4 address related functions. */
...