Commit e7096c13 authored by Jason A. Donenfeld, committed by David S. Miller

net: WireGuard secure network tunnel

WireGuard is a layer 3 secure networking tunnel made specifically for
the kernel, which aims to be much simpler and easier to audit than IPsec.
Extensive documentation and description of the protocol and
considerations, along with formal proofs of the cryptography, are
available at:

  * https://www.wireguard.com/
  * https://www.wireguard.com/papers/wireguard.pdf

This commit implements WireGuard as a simple network device driver,
accessible in the usual RTNL way used by virtual network drivers. It
makes use of the udp_tunnel APIs, GRO, GSO, NAPI, and the usual set of
networking subsystem APIs. It has a somewhat novel multicore queueing
system designed for maximum throughput and minimal latency of encryption
operations, but it is implemented modestly using workqueues and NAPI.
Configuration is done via generic Netlink, and following a review from
the Netlink maintainer a year ago, several high-profile userspace tools
have already implemented the API.

This commit also comes with several different tests, both in-kernel
tests and out-of-kernel tests based on network namespaces, taking
advantage of the fact that sockets used by WireGuard intentionally stay
in the namespace in which the WireGuard interface was originally
created, exactly like the semantics of userspace tun devices. See
wireguard.com/netns/ for pictures and examples.

The source code is fairly short, but rather than combining everything
into a single file, WireGuard is developed as cleanly separable files,
making auditing and comprehension easier. Things are laid out as
follows:

  * noise.[ch], cookie.[ch], messages.h: These implement the bulk of the
    cryptographic aspects of the protocol, and are mostly data-only in
    nature, taking in buffers of bytes and spitting out buffers of
    bytes. They also handle reference counting for their various shared
    pieces of data, like keys and key lists.

  * ratelimiter.[ch]: Used as an integral part of cookie.[ch] for
    ratelimiting certain types of cryptographic operations in accordance
    with particular WireGuard semantics.

  * allowedips.[ch], peerlookup.[ch]: The main lookup structures of
    WireGuard, the former being trie-like with particular semantics, an
    integral part of the design of the protocol, and the latter just
    being nice helper functions around the various hashtables we use.

  * device.[ch]: Implementation of functions for the netdevice and for
    rtnl, responsible for maintaining the life of a given interface and
    wiring it up to the rest of WireGuard.

  * peer.[ch]: Each interface has a list of peers, with helper functions
    available here for creation, destruction, and reference counting.

  * socket.[ch]: Implementation of functions related to udp_socket and
    the general set of kernel socket APIs, for sending and receiving
    ciphertext UDP packets, and taking care of WireGuard-specific sticky
    socket routing semantics for the automatic roaming.

  * netlink.[ch]: Userspace API entry point for configuring WireGuard
    peers and devices. The API has been implemented by several userspace
    tools and network management utilities, and the WireGuard project
    distributes the basic wg(8) tool.

  * queueing.[ch]: Shared functions on the rx and tx paths for handling
    the various queues used in the multicore algorithms.

  * send.c: Handles encrypting outgoing packets in parallel on
    multiple cores, before sending them in order on a single core, via
    workqueues and ring buffers. Also handles sending handshake and cookie
    messages as part of the protocol, in parallel.

  * receive.c: Handles decrypting incoming packets in parallel on
    multiple cores, before passing them off in order to be ingested via
    the rest of the networking subsystem with GRO via the typical NAPI
    poll function. Also handles receiving handshake and cookie messages
    as part of the protocol, in parallel.

  * timers.[ch]: Uses the timer wheel to implement protocol-specific
    event timeouts, and gives a set of very simple event-driven entry
    point functions for callers.

  * main.c, version.h: Initialization and deinitialization of the module.

  * selftest/*.h: Runtime unit tests for some of the most security
    sensitive functions.

  * tools/testing/selftests/wireguard/netns.sh: Aforementioned testing
    script using network namespaces.

This commit aims to be as self-contained as possible, implementing
WireGuard as a standalone module that needs little special handling or
coordination from the network subsystem. I expect future optimizations
to the network stack to benefit WireGuard, and vice versa, but for the
time being, this exists as intentionally standalone.

We introduce a menu option for CONFIG_WIREGUARD, as well as providing a
verbose debug log and self-tests via CONFIG_WIREGUARD_DEBUG.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: David Miller <davem@davemloft.net>
Cc: Greg KH <gregkh@linuxfoundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
parent e42617b8
@@ -17850,6 +17850,14 @@ L: linux-gpio@vger.kernel.org
S: Maintained
F: drivers/gpio/gpio-ws16c48.c
WIREGUARD SECURE NETWORK TUNNEL
M: Jason A. Donenfeld <Jason@zx2c4.com>
S: Maintained
F: drivers/net/wireguard/
F: tools/testing/selftests/wireguard/
L: wireguard@lists.zx2c4.com
L: netdev@vger.kernel.org
WISTRON LAPTOP BUTTON DRIVER
M: Miloslav Trmac <mitr@volny.cz>
S: Maintained
@@ -71,6 +71,47 @@ config DUMMY
To compile this driver as a module, choose M here: the module
will be called dummy.
config WIREGUARD
tristate "WireGuard secure network tunnel"
depends on NET && INET
depends on IPV6 || !IPV6
select NET_UDP_TUNNEL
select DST_CACHE
select CRYPTO
select CRYPTO_LIB_CURVE25519
select CRYPTO_LIB_CHACHA20POLY1305
select CRYPTO_LIB_BLAKE2S
select CRYPTO_CHACHA20_X86_64 if X86 && 64BIT
select CRYPTO_POLY1305_X86_64 if X86 && 64BIT
select CRYPTO_BLAKE2S_X86 if X86 && 64BIT
select CRYPTO_CURVE25519_X86 if X86 && 64BIT
select CRYPTO_CHACHA20_NEON if (ARM || ARM64) && KERNEL_MODE_NEON
select CRYPTO_POLY1305_NEON if ARM64 && KERNEL_MODE_NEON
select CRYPTO_POLY1305_ARM if ARM
select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON
select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2
select CRYPTO_POLY1305_MIPS if CPU_MIPS32 || (CPU_MIPS64 && 64BIT)
help
WireGuard is a secure, fast, and easy to use replacement for IPSec
that uses modern cryptography and clever networking tricks. It's
designed to be fairly general purpose and abstract enough to fit most
use cases, while at the same time remaining extremely simple to
configure. See www.wireguard.com for more info.
It's safe to say Y or M here, as the driver is very lightweight and
is only in use when an administrator chooses to add an interface.
config WIREGUARD_DEBUG
bool "Debugging checks and verbose messages"
depends on WIREGUARD
help
This will write log messages for handshake and other events
that occur for a WireGuard interface. It will also perform some
extra validation checks and unit tests at various points. This is
only useful for debugging.
Say N here unless you know what you're doing.
config EQUALIZER
tristate "EQL (serial line load balancing) support"
---help---
@@ -10,6 +10,7 @@ obj-$(CONFIG_BONDING) += bonding/
obj-$(CONFIG_IPVLAN) += ipvlan/
obj-$(CONFIG_IPVTAP) += ipvlan/
obj-$(CONFIG_DUMMY) += dummy.o
obj-$(CONFIG_WIREGUARD) += wireguard/
obj-$(CONFIG_EQUALIZER) += eql.o
obj-$(CONFIG_IFB) += ifb.o
obj-$(CONFIG_MACSEC) += macsec.o
ccflags-y := -O3
ccflags-y += -D'pr_fmt(fmt)=KBUILD_MODNAME ": " fmt'
ccflags-$(CONFIG_WIREGUARD_DEBUG) += -DDEBUG
wireguard-y := main.o
wireguard-y += noise.o
wireguard-y += device.o
wireguard-y += peer.o
wireguard-y += timers.o
wireguard-y += queueing.o
wireguard-y += send.o
wireguard-y += receive.o
wireguard-y += socket.o
wireguard-y += peerlookup.o
wireguard-y += allowedips.o
wireguard-y += ratelimiter.o
wireguard-y += cookie.o
wireguard-y += netlink.o
obj-$(CONFIG_WIREGUARD) := wireguard.o
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_ALLOWEDIPS_H
#define _WG_ALLOWEDIPS_H
#include <linux/mutex.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
struct wg_peer;
struct allowedips_node {
struct wg_peer __rcu *peer;
struct allowedips_node __rcu *bit[2];
/* While it may seem scandalous that we waste space for v4,
* we're alloc'ing to the nearest power of 2 anyway, so this
* doesn't actually make a difference.
*/
u8 bits[16] __aligned(__alignof(u64));
u8 cidr, bit_at_a, bit_at_b, bitlen;
/* Keep rarely used list at bottom to be beyond cache line. */
union {
struct list_head peer_list;
struct rcu_head rcu;
};
};
struct allowedips {
struct allowedips_node __rcu *root4;
struct allowedips_node __rcu *root6;
u64 seq;
};
void wg_allowedips_init(struct allowedips *table);
void wg_allowedips_free(struct allowedips *table, struct mutex *mutex);
int wg_allowedips_insert_v4(struct allowedips *table, const struct in_addr *ip,
u8 cidr, struct wg_peer *peer, struct mutex *lock);
int wg_allowedips_insert_v6(struct allowedips *table, const struct in6_addr *ip,
u8 cidr, struct wg_peer *peer, struct mutex *lock);
void wg_allowedips_remove_by_peer(struct allowedips *table,
struct wg_peer *peer, struct mutex *lock);
/* The ip input pointer should be __aligned(__alignof(u64)) */
int wg_allowedips_read_node(struct allowedips_node *node, u8 ip[16], u8 *cidr);
/* These return a strong reference to a peer: */
struct wg_peer *wg_allowedips_lookup_dst(struct allowedips *table,
struct sk_buff *skb);
struct wg_peer *wg_allowedips_lookup_src(struct allowedips *table,
struct sk_buff *skb);
#ifdef DEBUG
bool wg_allowedips_selftest(void);
#endif
#endif /* _WG_ALLOWEDIPS_H */
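As a rough illustration of how this API fits together, here is a hypothetical sketch (not code from this commit; it assumes a valid struct wg_peer and that the caller holds the device's update mutex) that inserts a /24 for a peer and then routes an outgoing skb by its destination address:

/* Hypothetical usage sketch: map 10.0.0.0/24 to a peer, then resolve an
 * outgoing skb to that peer by destination address.
 */
static struct wg_peer *route_sketch(struct allowedips *table,
				    struct wg_peer *peer,
				    struct mutex *update_lock,
				    struct sk_buff *skb)
{
	const struct in_addr net = { .s_addr = htonl(0x0a000000) }; /* 10.0.0.0 */

	if (wg_allowedips_insert_v4(table, &net, 24, peer, update_lock) < 0)
		return NULL;
	/* The lookup returns a strong reference; the caller must
	 * wg_peer_put() it when done.
	 */
	return wg_allowedips_lookup_dst(table, skb);
}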
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "cookie.h"
#include "peer.h"
#include "device.h"
#include "messages.h"
#include "ratelimiter.h"
#include "timers.h"
#include <crypto/blake2s.h>
#include <crypto/chacha20poly1305.h>
#include <net/ipv6.h>
#include <crypto/algapi.h>
void wg_cookie_checker_init(struct cookie_checker *checker,
struct wg_device *wg)
{
init_rwsem(&checker->secret_lock);
checker->secret_birthdate = ktime_get_coarse_boottime_ns();
get_random_bytes(checker->secret, NOISE_HASH_LEN);
checker->device = wg;
}
enum { COOKIE_KEY_LABEL_LEN = 8 };
static const u8 mac1_key_label[COOKIE_KEY_LABEL_LEN] = "mac1----";
static const u8 cookie_key_label[COOKIE_KEY_LABEL_LEN] = "cookie--";
static void precompute_key(u8 key[NOISE_SYMMETRIC_KEY_LEN],
const u8 pubkey[NOISE_PUBLIC_KEY_LEN],
const u8 label[COOKIE_KEY_LABEL_LEN])
{
struct blake2s_state blake;
blake2s_init(&blake, NOISE_SYMMETRIC_KEY_LEN);
blake2s_update(&blake, label, COOKIE_KEY_LABEL_LEN);
blake2s_update(&blake, pubkey, NOISE_PUBLIC_KEY_LEN);
blake2s_final(&blake, key);
}
/* Must hold peer->handshake.static_identity->lock */
void wg_cookie_checker_precompute_device_keys(struct cookie_checker *checker)
{
if (likely(checker->device->static_identity.has_identity)) {
precompute_key(checker->cookie_encryption_key,
checker->device->static_identity.static_public,
cookie_key_label);
precompute_key(checker->message_mac1_key,
checker->device->static_identity.static_public,
mac1_key_label);
} else {
memset(checker->cookie_encryption_key, 0,
NOISE_SYMMETRIC_KEY_LEN);
memset(checker->message_mac1_key, 0, NOISE_SYMMETRIC_KEY_LEN);
}
}
void wg_cookie_checker_precompute_peer_keys(struct wg_peer *peer)
{
precompute_key(peer->latest_cookie.cookie_decryption_key,
peer->handshake.remote_static, cookie_key_label);
precompute_key(peer->latest_cookie.message_mac1_key,
peer->handshake.remote_static, mac1_key_label);
}
void wg_cookie_init(struct cookie *cookie)
{
memset(cookie, 0, sizeof(*cookie));
init_rwsem(&cookie->lock);
}
static void compute_mac1(u8 mac1[COOKIE_LEN], const void *message, size_t len,
const u8 key[NOISE_SYMMETRIC_KEY_LEN])
{
len = len - sizeof(struct message_macs) +
offsetof(struct message_macs, mac1);
blake2s(mac1, message, key, COOKIE_LEN, len, NOISE_SYMMETRIC_KEY_LEN);
}
static void compute_mac2(u8 mac2[COOKIE_LEN], const void *message, size_t len,
const u8 cookie[COOKIE_LEN])
{
len = len - sizeof(struct message_macs) +
offsetof(struct message_macs, mac2);
blake2s(mac2, message, cookie, COOKIE_LEN, len, COOKIE_LEN);
}
static void make_cookie(u8 cookie[COOKIE_LEN], struct sk_buff *skb,
struct cookie_checker *checker)
{
struct blake2s_state state;
if (wg_birthdate_has_expired(checker->secret_birthdate,
COOKIE_SECRET_MAX_AGE)) {
down_write(&checker->secret_lock);
checker->secret_birthdate = ktime_get_coarse_boottime_ns();
get_random_bytes(checker->secret, NOISE_HASH_LEN);
up_write(&checker->secret_lock);
}
down_read(&checker->secret_lock);
blake2s_init_key(&state, COOKIE_LEN, checker->secret, NOISE_HASH_LEN);
if (skb->protocol == htons(ETH_P_IP))
blake2s_update(&state, (u8 *)&ip_hdr(skb)->saddr,
sizeof(struct in_addr));
else if (skb->protocol == htons(ETH_P_IPV6))
blake2s_update(&state, (u8 *)&ipv6_hdr(skb)->saddr,
sizeof(struct in6_addr));
blake2s_update(&state, (u8 *)&udp_hdr(skb)->source, sizeof(__be16));
blake2s_final(&state, cookie);
up_read(&checker->secret_lock);
}
enum cookie_mac_state wg_cookie_validate_packet(struct cookie_checker *checker,
struct sk_buff *skb,
bool check_cookie)
{
struct message_macs *macs = (struct message_macs *)
(skb->data + skb->len - sizeof(*macs));
enum cookie_mac_state ret;
u8 computed_mac[COOKIE_LEN];
u8 cookie[COOKIE_LEN];
ret = INVALID_MAC;
compute_mac1(computed_mac, skb->data, skb->len,
checker->message_mac1_key);
if (crypto_memneq(computed_mac, macs->mac1, COOKIE_LEN))
goto out;
ret = VALID_MAC_BUT_NO_COOKIE;
if (!check_cookie)
goto out;
make_cookie(cookie, skb, checker);
compute_mac2(computed_mac, skb->data, skb->len, cookie);
if (crypto_memneq(computed_mac, macs->mac2, COOKIE_LEN))
goto out;
ret = VALID_MAC_WITH_COOKIE_BUT_RATELIMITED;
if (!wg_ratelimiter_allow(skb, dev_net(checker->device->dev)))
goto out;
ret = VALID_MAC_WITH_COOKIE;
out:
return ret;
}
void wg_cookie_add_mac_to_packet(void *message, size_t len,
struct wg_peer *peer)
{
struct message_macs *macs = (struct message_macs *)
((u8 *)message + len - sizeof(*macs));
down_write(&peer->latest_cookie.lock);
compute_mac1(macs->mac1, message, len,
peer->latest_cookie.message_mac1_key);
memcpy(peer->latest_cookie.last_mac1_sent, macs->mac1, COOKIE_LEN);
peer->latest_cookie.have_sent_mac1 = true;
up_write(&peer->latest_cookie.lock);
down_read(&peer->latest_cookie.lock);
if (peer->latest_cookie.is_valid &&
!wg_birthdate_has_expired(peer->latest_cookie.birthdate,
COOKIE_SECRET_MAX_AGE - COOKIE_SECRET_LATENCY))
compute_mac2(macs->mac2, message, len,
peer->latest_cookie.cookie);
else
memset(macs->mac2, 0, COOKIE_LEN);
up_read(&peer->latest_cookie.lock);
}
void wg_cookie_message_create(struct message_handshake_cookie *dst,
struct sk_buff *skb, __le32 index,
struct cookie_checker *checker)
{
struct message_macs *macs = (struct message_macs *)
((u8 *)skb->data + skb->len - sizeof(*macs));
u8 cookie[COOKIE_LEN];
dst->header.type = cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE);
dst->receiver_index = index;
get_random_bytes_wait(dst->nonce, COOKIE_NONCE_LEN);
make_cookie(cookie, skb, checker);
xchacha20poly1305_encrypt(dst->encrypted_cookie, cookie, COOKIE_LEN,
macs->mac1, COOKIE_LEN, dst->nonce,
checker->cookie_encryption_key);
}
void wg_cookie_message_consume(struct message_handshake_cookie *src,
struct wg_device *wg)
{
struct wg_peer *peer = NULL;
u8 cookie[COOKIE_LEN];
bool ret;
if (unlikely(!wg_index_hashtable_lookup(wg->index_hashtable,
INDEX_HASHTABLE_HANDSHAKE |
INDEX_HASHTABLE_KEYPAIR,
src->receiver_index, &peer)))
return;
down_read(&peer->latest_cookie.lock);
if (unlikely(!peer->latest_cookie.have_sent_mac1)) {
up_read(&peer->latest_cookie.lock);
goto out;
}
ret = xchacha20poly1305_decrypt(
cookie, src->encrypted_cookie, sizeof(src->encrypted_cookie),
peer->latest_cookie.last_mac1_sent, COOKIE_LEN, src->nonce,
peer->latest_cookie.cookie_decryption_key);
up_read(&peer->latest_cookie.lock);
if (ret) {
down_write(&peer->latest_cookie.lock);
memcpy(peer->latest_cookie.cookie, cookie, COOKIE_LEN);
peer->latest_cookie.birthdate = ktime_get_coarse_boottime_ns();
peer->latest_cookie.is_valid = true;
peer->latest_cookie.have_sent_mac1 = false;
up_write(&peer->latest_cookie.lock);
} else {
net_dbg_ratelimited("%s: Could not decrypt invalid cookie response\n",
wg->dev->name);
}
out:
wg_peer_put(peer);
}
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_COOKIE_H
#define _WG_COOKIE_H
#include "messages.h"
#include <linux/rwsem.h>
struct wg_peer;
struct cookie_checker {
u8 secret[NOISE_HASH_LEN];
u8 cookie_encryption_key[NOISE_SYMMETRIC_KEY_LEN];
u8 message_mac1_key[NOISE_SYMMETRIC_KEY_LEN];
u64 secret_birthdate;
struct rw_semaphore secret_lock;
struct wg_device *device;
};
struct cookie {
u64 birthdate;
bool is_valid;
u8 cookie[COOKIE_LEN];
bool have_sent_mac1;
u8 last_mac1_sent[COOKIE_LEN];
u8 cookie_decryption_key[NOISE_SYMMETRIC_KEY_LEN];
u8 message_mac1_key[NOISE_SYMMETRIC_KEY_LEN];
struct rw_semaphore lock;
};
enum cookie_mac_state {
INVALID_MAC,
VALID_MAC_BUT_NO_COOKIE,
VALID_MAC_WITH_COOKIE_BUT_RATELIMITED,
VALID_MAC_WITH_COOKIE
};
void wg_cookie_checker_init(struct cookie_checker *checker,
struct wg_device *wg);
void wg_cookie_checker_precompute_device_keys(struct cookie_checker *checker);
void wg_cookie_checker_precompute_peer_keys(struct wg_peer *peer);
void wg_cookie_init(struct cookie *cookie);
enum cookie_mac_state wg_cookie_validate_packet(struct cookie_checker *checker,
struct sk_buff *skb,
bool check_cookie);
void wg_cookie_add_mac_to_packet(void *message, size_t len,
struct wg_peer *peer);
void wg_cookie_message_create(struct message_handshake_cookie *dst,
struct sk_buff *skb, __le32 index,
struct cookie_checker *checker);
void wg_cookie_message_consume(struct message_handshake_cookie *src,
struct wg_device *wg);
#endif /* _WG_COOKIE_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_DEVICE_H
#define _WG_DEVICE_H
#include "noise.h"
#include "allowedips.h"
#include "peerlookup.h"
#include "cookie.h"
#include <linux/types.h>
#include <linux/netdevice.h>
#include <linux/workqueue.h>
#include <linux/mutex.h>
#include <linux/net.h>
#include <linux/ptr_ring.h>
struct wg_device;
struct multicore_worker {
void *ptr;
struct work_struct work;
};
struct crypt_queue {
struct ptr_ring ring;
union {
struct {
struct multicore_worker __percpu *worker;
int last_cpu;
};
struct work_struct work;
};
};
struct wg_device {
struct net_device *dev;
struct crypt_queue encrypt_queue, decrypt_queue;
struct sock __rcu *sock4, *sock6;
struct net *creating_net;
struct noise_static_identity static_identity;
struct workqueue_struct *handshake_receive_wq, *handshake_send_wq;
struct workqueue_struct *packet_crypt_wq;
struct sk_buff_head incoming_handshakes;
int incoming_handshake_cpu;
struct multicore_worker __percpu *incoming_handshakes_worker;
struct cookie_checker cookie_checker;
struct pubkey_hashtable *peer_hashtable;
struct index_hashtable *index_hashtable;
struct allowedips peer_allowedips;
struct mutex device_update_lock, socket_update_lock;
struct list_head device_list, peer_list;
unsigned int num_peers, device_update_gen;
u32 fwmark;
u16 incoming_port;
bool have_creating_net_ref;
};
int wg_device_init(void);
void wg_device_uninit(void);
/* Later after the dust settles, this can be moved into include/linux/skbuff.h,
* where virtually all code that deals with GSO segs can benefit, around ~30
* drivers as of writing.
*/
#define skb_list_walk_safe(first, skb, next) \
for (skb = first, next = skb->next; skb; \
skb = next, next = skb ? skb->next : NULL)
#endif /* _WG_DEVICE_H */
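To illustrate the macro above, consider a hypothetical helper (not part of this commit) that frees every segment of a GSO skb list; the _safe form matters because each iteration may consume the current skb:

/* Hypothetical illustration of skb_list_walk_safe(): "next" is sampled
 * before the loop body runs, so freeing "skb" mid-walk is safe.
 */
static void free_segments_sketch(struct sk_buff *first)
{
	struct sk_buff *skb, *next;

	skb_list_walk_safe(first, skb, next) {
		skb_mark_not_on_list(skb); /* detach before freeing */
		kfree_skb(skb);
	}
}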
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "version.h"
#include "device.h"
#include "noise.h"
#include "queueing.h"
#include "ratelimiter.h"
#include "netlink.h"
#include <uapi/linux/wireguard.h>
#include <linux/version.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/genetlink.h>
#include <net/rtnetlink.h>
static int __init mod_init(void)
{
int ret;
#ifdef DEBUG
if (!wg_allowedips_selftest() || !wg_packet_counter_selftest() ||
!wg_ratelimiter_selftest())
return -ENOTRECOVERABLE;
#endif
wg_noise_init();
ret = wg_device_init();
if (ret < 0)
goto err_device;
ret = wg_genetlink_init();
if (ret < 0)
goto err_netlink;
pr_info("WireGuard " WIREGUARD_VERSION " loaded. See www.wireguard.com for information.\n");
pr_info("Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.\n");
return 0;
err_netlink:
wg_device_uninit();
err_device:
return ret;
}
static void __exit mod_exit(void)
{
wg_genetlink_uninit();
wg_device_uninit();
}
module_init(mod_init);
module_exit(mod_exit);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("WireGuard secure network tunnel");
MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");
MODULE_VERSION(WIREGUARD_VERSION);
MODULE_ALIAS_RTNL_LINK(KBUILD_MODNAME);
MODULE_ALIAS_GENL_FAMILY(WG_GENL_NAME);
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_MESSAGES_H
#define _WG_MESSAGES_H
#include <crypto/curve25519.h>
#include <crypto/chacha20poly1305.h>
#include <crypto/blake2s.h>
#include <linux/kernel.h>
#include <linux/param.h>
#include <linux/skbuff.h>
enum noise_lengths {
NOISE_PUBLIC_KEY_LEN = CURVE25519_KEY_SIZE,
NOISE_SYMMETRIC_KEY_LEN = CHACHA20POLY1305_KEY_SIZE,
NOISE_TIMESTAMP_LEN = sizeof(u64) + sizeof(u32),
NOISE_AUTHTAG_LEN = CHACHA20POLY1305_AUTHTAG_SIZE,
NOISE_HASH_LEN = BLAKE2S_HASH_SIZE
};
#define noise_encrypted_len(plain_len) ((plain_len) + NOISE_AUTHTAG_LEN)
enum cookie_values {
COOKIE_SECRET_MAX_AGE = 2 * 60,
COOKIE_SECRET_LATENCY = 5,
COOKIE_NONCE_LEN = XCHACHA20POLY1305_NONCE_SIZE,
COOKIE_LEN = 16
};
enum counter_values {
COUNTER_BITS_TOTAL = 2048,
COUNTER_REDUNDANT_BITS = BITS_PER_LONG,
COUNTER_WINDOW_SIZE = COUNTER_BITS_TOTAL - COUNTER_REDUNDANT_BITS
};
enum limits {
REKEY_AFTER_MESSAGES = 1ULL << 60,
REJECT_AFTER_MESSAGES = U64_MAX - COUNTER_WINDOW_SIZE - 1,
REKEY_TIMEOUT = 5,
REKEY_TIMEOUT_JITTER_MAX_JIFFIES = HZ / 3,
REKEY_AFTER_TIME = 120,
REJECT_AFTER_TIME = 180,
INITIATIONS_PER_SECOND = 50,
MAX_PEERS_PER_DEVICE = 1U << 20,
KEEPALIVE_TIMEOUT = 10,
MAX_TIMER_HANDSHAKES = 90 / REKEY_TIMEOUT,
MAX_QUEUED_INCOMING_HANDSHAKES = 4096, /* TODO: replace this with DQL */
MAX_STAGED_PACKETS = 128,
MAX_QUEUED_PACKETS = 1024 /* TODO: replace this with DQL */
};
enum message_type {
MESSAGE_INVALID = 0,
MESSAGE_HANDSHAKE_INITIATION = 1,
MESSAGE_HANDSHAKE_RESPONSE = 2,
MESSAGE_HANDSHAKE_COOKIE = 3,
MESSAGE_DATA = 4
};
struct message_header {
/* The actual layout of this that we want is:
* u8 type
* u8 reserved_zero[3]
*
* But it turns out that by encoding this as little endian,
* we achieve the same thing, and it makes checking faster.
*/
__le32 type;
};
struct message_macs {
u8 mac1[COOKIE_LEN];
u8 mac2[COOKIE_LEN];
};
struct message_handshake_initiation {
struct message_header header;
__le32 sender_index;
u8 unencrypted_ephemeral[NOISE_PUBLIC_KEY_LEN];
u8 encrypted_static[noise_encrypted_len(NOISE_PUBLIC_KEY_LEN)];
u8 encrypted_timestamp[noise_encrypted_len(NOISE_TIMESTAMP_LEN)];
struct message_macs macs;
};
struct message_handshake_response {
struct message_header header;
__le32 sender_index;
__le32 receiver_index;
u8 unencrypted_ephemeral[NOISE_PUBLIC_KEY_LEN];
u8 encrypted_nothing[noise_encrypted_len(0)];
struct message_macs macs;
};
struct message_handshake_cookie {
struct message_header header;
__le32 receiver_index;
u8 nonce[COOKIE_NONCE_LEN];
u8 encrypted_cookie[noise_encrypted_len(COOKIE_LEN)];
};
struct message_data {
struct message_header header;
__le32 key_idx;
__le64 counter;
u8 encrypted_data[];
};
#define message_data_len(plain_len) \
(noise_encrypted_len(plain_len) + sizeof(struct message_data))
enum message_alignments {
MESSAGE_PADDING_MULTIPLE = 16,
MESSAGE_MINIMUM_LENGTH = message_data_len(0)
};
#define SKB_HEADER_LEN \
(max(sizeof(struct iphdr), sizeof(struct ipv6hdr)) + \
sizeof(struct udphdr) + NET_SKB_PAD)
#define DATA_PACKET_HEAD_ROOM \
ALIGN(sizeof(struct message_data) + SKB_HEADER_LEN, 4)
enum { HANDSHAKE_DSCP = 0x88 /* AF41, plus 00 ECN */ };
#endif /* _WG_MESSAGES_H */
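The constants above fix the on-wire size of a data packet. As a sketch of the arithmetic (illustration only; the authoritative padding logic lives in send.c, not shown in this view), plaintext is rounded up to MESSAGE_PADDING_MULTIPLE without exceeding the MTU, and the UDP payload is then message_data_len() of the padded length:

/* Sketch: pad plaintext to a multiple of MESSAGE_PADDING_MULTIPLE (16),
 * bounded by the MTU.
 */
static inline size_t padded_plaintext_len(size_t plain_len, size_t mtu)
{
	size_t padded = (plain_len + MESSAGE_PADDING_MULTIPLE - 1) &
			~(size_t)(MESSAGE_PADDING_MULTIPLE - 1);

	return padded > mtu ? mtu : padded;
}
/* Example: plain_len = 100, mtu = 1420 -> padded = 112, so the UDP
 * payload is message_data_len(112) = 16 + 112 + 16 = 144 bytes.
 */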
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_NETLINK_H
#define _WG_NETLINK_H
int wg_genetlink_init(void);
void wg_genetlink_uninit(void);
#endif /* _WG_NETLINK_H */
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_NOISE_H
#define _WG_NOISE_H
#include "messages.h"
#include "peerlookup.h"
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/atomic.h>
#include <linux/rwsem.h>
#include <linux/mutex.h>
#include <linux/kref.h>
union noise_counter {
struct {
u64 counter;
unsigned long backtrack[COUNTER_BITS_TOTAL / BITS_PER_LONG];
spinlock_t lock;
} receive;
atomic64_t counter;
};
struct noise_symmetric_key {
u8 key[NOISE_SYMMETRIC_KEY_LEN];
union noise_counter counter;
u64 birthdate;
bool is_valid;
};
struct noise_keypair {
struct index_hashtable_entry entry;
struct noise_symmetric_key sending;
struct noise_symmetric_key receiving;
__le32 remote_index;
bool i_am_the_initiator;
struct kref refcount;
struct rcu_head rcu;
u64 internal_id;
};
struct noise_keypairs {
struct noise_keypair __rcu *current_keypair;
struct noise_keypair __rcu *previous_keypair;
struct noise_keypair __rcu *next_keypair;
spinlock_t keypair_update_lock;
};
struct noise_static_identity {
u8 static_public[NOISE_PUBLIC_KEY_LEN];
u8 static_private[NOISE_PUBLIC_KEY_LEN];
struct rw_semaphore lock;
bool has_identity;
};
enum noise_handshake_state {
HANDSHAKE_ZEROED,
HANDSHAKE_CREATED_INITIATION,
HANDSHAKE_CONSUMED_INITIATION,
HANDSHAKE_CREATED_RESPONSE,
HANDSHAKE_CONSUMED_RESPONSE
};
struct noise_handshake {
struct index_hashtable_entry entry;
enum noise_handshake_state state;
u64 last_initiation_consumption;
struct noise_static_identity *static_identity;
u8 ephemeral_private[NOISE_PUBLIC_KEY_LEN];
u8 remote_static[NOISE_PUBLIC_KEY_LEN];
u8 remote_ephemeral[NOISE_PUBLIC_KEY_LEN];
u8 precomputed_static_static[NOISE_PUBLIC_KEY_LEN];
u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN];
u8 hash[NOISE_HASH_LEN];
u8 chaining_key[NOISE_HASH_LEN];
u8 latest_timestamp[NOISE_TIMESTAMP_LEN];
__le32 remote_index;
/* Protects all members except the immutable (after noise_handshake_
* init): remote_static, precomputed_static_static, static_identity.
*/
struct rw_semaphore lock;
};
struct wg_device;
void wg_noise_init(void);
bool wg_noise_handshake_init(struct noise_handshake *handshake,
struct noise_static_identity *static_identity,
const u8 peer_public_key[NOISE_PUBLIC_KEY_LEN],
const u8 peer_preshared_key[NOISE_SYMMETRIC_KEY_LEN],
struct wg_peer *peer);
void wg_noise_handshake_clear(struct noise_handshake *handshake);
static inline void wg_noise_reset_last_sent_handshake(atomic64_t *handshake_ns)
{
atomic64_set(handshake_ns, ktime_get_coarse_boottime_ns() -
(u64)(REKEY_TIMEOUT + 1) * NSEC_PER_SEC);
}
void wg_noise_keypair_put(struct noise_keypair *keypair, bool unreference_now);
struct noise_keypair *wg_noise_keypair_get(struct noise_keypair *keypair);
void wg_noise_keypairs_clear(struct noise_keypairs *keypairs);
bool wg_noise_received_with_keypair(struct noise_keypairs *keypairs,
struct noise_keypair *received_keypair);
void wg_noise_expire_current_peer_keypairs(struct wg_peer *peer);
void wg_noise_set_static_identity_private_key(
struct noise_static_identity *static_identity,
const u8 private_key[NOISE_PUBLIC_KEY_LEN]);
bool wg_noise_precompute_static_static(struct wg_peer *peer);
bool
wg_noise_handshake_create_initiation(struct message_handshake_initiation *dst,
struct noise_handshake *handshake);
struct wg_peer *
wg_noise_handshake_consume_initiation(struct message_handshake_initiation *src,
struct wg_device *wg);
bool wg_noise_handshake_create_response(struct message_handshake_response *dst,
struct noise_handshake *handshake);
struct wg_peer *
wg_noise_handshake_consume_response(struct message_handshake_response *src,
struct wg_device *wg);
bool wg_noise_handshake_begin_session(struct noise_handshake *handshake,
struct noise_keypairs *keypairs);
#endif /* _WG_NOISE_H */
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "peer.h"
#include "device.h"
#include "queueing.h"
#include "timers.h"
#include "peerlookup.h"
#include "noise.h"
#include <linux/kref.h>
#include <linux/lockdep.h>
#include <linux/rcupdate.h>
#include <linux/list.h>
static atomic64_t peer_counter = ATOMIC64_INIT(0);
struct wg_peer *wg_peer_create(struct wg_device *wg,
const u8 public_key[NOISE_PUBLIC_KEY_LEN],
const u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN])
{
struct wg_peer *peer;
int ret = -ENOMEM;
lockdep_assert_held(&wg->device_update_lock);
if (wg->num_peers >= MAX_PEERS_PER_DEVICE)
return ERR_PTR(ret);
peer = kzalloc(sizeof(*peer), GFP_KERNEL);
if (unlikely(!peer))
return ERR_PTR(ret);
peer->device = wg;
if (!wg_noise_handshake_init(&peer->handshake, &wg->static_identity,
public_key, preshared_key, peer)) {
ret = -EKEYREJECTED;
goto err_1;
}
if (dst_cache_init(&peer->endpoint_cache, GFP_KERNEL))
goto err_1;
if (wg_packet_queue_init(&peer->tx_queue, wg_packet_tx_worker, false,
MAX_QUEUED_PACKETS))
goto err_2;
if (wg_packet_queue_init(&peer->rx_queue, NULL, false,
MAX_QUEUED_PACKETS))
goto err_3;
peer->internal_id = atomic64_inc_return(&peer_counter);
peer->serial_work_cpu = nr_cpumask_bits;
wg_cookie_init(&peer->latest_cookie);
wg_timers_init(peer);
wg_cookie_checker_precompute_peer_keys(peer);
spin_lock_init(&peer->keypairs.keypair_update_lock);
INIT_WORK(&peer->transmit_handshake_work,
wg_packet_handshake_send_worker);
rwlock_init(&peer->endpoint_lock);
kref_init(&peer->refcount);
skb_queue_head_init(&peer->staged_packet_queue);
wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
set_bit(NAPI_STATE_NO_BUSY_POLL, &peer->napi.state);
netif_napi_add(wg->dev, &peer->napi, wg_packet_rx_poll,
NAPI_POLL_WEIGHT);
napi_enable(&peer->napi);
list_add_tail(&peer->peer_list, &wg->peer_list);
INIT_LIST_HEAD(&peer->allowedips_list);
wg_pubkey_hashtable_add(wg->peer_hashtable, peer);
++wg->num_peers;
pr_debug("%s: Peer %llu created\n", wg->dev->name, peer->internal_id);
return peer;
err_3:
wg_packet_queue_free(&peer->tx_queue, false);
err_2:
dst_cache_destroy(&peer->endpoint_cache);
err_1:
kfree(peer);
return ERR_PTR(ret);
}
struct wg_peer *wg_peer_get_maybe_zero(struct wg_peer *peer)
{
RCU_LOCKDEP_WARN(!rcu_read_lock_bh_held(),
"Taking peer reference without holding the RCU read lock");
if (unlikely(!peer || !kref_get_unless_zero(&peer->refcount)))
return NULL;
return peer;
}
static void peer_make_dead(struct wg_peer *peer)
{
/* Remove from configuration-time lookup structures. */
list_del_init(&peer->peer_list);
wg_allowedips_remove_by_peer(&peer->device->peer_allowedips, peer,
&peer->device->device_update_lock);
wg_pubkey_hashtable_remove(peer->device->peer_hashtable, peer);
/* Mark as dead, so that we don't allow jumping contexts after. */
WRITE_ONCE(peer->is_dead, true);
/* The caller must now synchronize_rcu() for this to take effect. */
}
static void peer_remove_after_dead(struct wg_peer *peer)
{
WARN_ON(!peer->is_dead);
/* No more keypairs can be created for this peer, since is_dead protects
* add_new_keypair, so we can now destroy existing ones.
*/
wg_noise_keypairs_clear(&peer->keypairs);
/* Destroy all ongoing timers that were in-flight at the beginning of
* this function.
*/
wg_timers_stop(peer);
/* The transition between packet encryption/decryption queues isn't
* guarded by is_dead, but each reference's life is strictly bounded by
* two generations: once for parallel crypto and once for serial
* ingestion, so we can simply flush twice, and be sure that we no
* longer have references inside these queues.
*/
/* a) For encrypt/decrypt. */
flush_workqueue(peer->device->packet_crypt_wq);
/* b.1) For send (but not receive, since that's napi). */
flush_workqueue(peer->device->packet_crypt_wq);
/* b.2.1) For receive (but not send, since that's wq). */
napi_disable(&peer->napi);
/* b.2.2) It's now safe to remove the napi struct, which must be done
* here from process context.
*/
netif_napi_del(&peer->napi);
/* Ensure any workstructs we own (like transmit_handshake_work or
* clear_peer_work) no longer are in use.
*/
flush_workqueue(peer->device->handshake_send_wq);
/* After the above flushes, a peer might still be active in a few
* different contexts: 1) from xmit(), before hitting is_dead and
* returning, 2) from wg_packet_consume_data(), before hitting is_dead
* and returning, 3) from wg_receive_handshake_packet() after a point
* where it has processed an incoming handshake packet, but where
* all calls to pass it off to timers fails because of is_dead. We won't
* have new references in (1) eventually, because we're removed from
* allowedips; we won't have new references in (2) eventually, because
* wg_index_hashtable_lookup will always return NULL, since we removed
* all existing keypairs and no more can be created; we won't have new
* references in (3) eventually, because we're removed from the pubkey
* hash table, which allows for a maximum of one handshake response,
* via the still-uncleared index hashtable entry, but not more than one,
* and in wg_cookie_message_consume, the lookup eventually gets a peer
* with a refcount of zero, so no new reference is taken.
*/
--peer->device->num_peers;
wg_peer_put(peer);
}
/* We have a separate "remove" function to make sure that all active places
* a peer is currently operating will eventually come to an end and not pass
* their reference onto another context.
*/
void wg_peer_remove(struct wg_peer *peer)
{
if (unlikely(!peer))
return;
lockdep_assert_held(&peer->device->device_update_lock);
peer_make_dead(peer);
synchronize_rcu();
peer_remove_after_dead(peer);
}
void wg_peer_remove_all(struct wg_device *wg)
{
struct wg_peer *peer, *temp;
LIST_HEAD(dead_peers);
lockdep_assert_held(&wg->device_update_lock);
/* Avoid having to traverse individually for each one. */
wg_allowedips_free(&wg->peer_allowedips, &wg->device_update_lock);
list_for_each_entry_safe(peer, temp, &wg->peer_list, peer_list) {
peer_make_dead(peer);
list_add_tail(&peer->peer_list, &dead_peers);
}
synchronize_rcu();
list_for_each_entry_safe(peer, temp, &dead_peers, peer_list)
peer_remove_after_dead(peer);
}
static void rcu_release(struct rcu_head *rcu)
{
struct wg_peer *peer = container_of(rcu, struct wg_peer, rcu);
dst_cache_destroy(&peer->endpoint_cache);
wg_packet_queue_free(&peer->rx_queue, false);
wg_packet_queue_free(&peer->tx_queue, false);
/* The final zeroing takes care of clearing any remaining handshake key
* material and other potentially sensitive information.
*/
kzfree(peer);
}
static void kref_release(struct kref *refcount)
{
struct wg_peer *peer = container_of(refcount, struct wg_peer, refcount);
pr_debug("%s: Peer %llu (%pISpfsc) destroyed\n",
peer->device->dev->name, peer->internal_id,
&peer->endpoint.addr);
/* Remove ourselves from dynamic runtime lookup structures, now that the
* last reference is gone.
*/
wg_index_hashtable_remove(peer->device->index_hashtable,
&peer->handshake.entry);
/* Remove any lingering packets that didn't have a chance to be
* transmitted.
*/
wg_packet_purge_staged_packets(peer);
/* Free the memory used. */
call_rcu(&peer->rcu, rcu_release);
}
void wg_peer_put(struct wg_peer *peer)
{
if (unlikely(!peer))
return;
kref_put(&peer->refcount, kref_release);
}
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_PEER_H
#define _WG_PEER_H
#include "device.h"
#include "noise.h"
#include "cookie.h"
#include <linux/types.h>
#include <linux/netfilter.h>
#include <linux/spinlock.h>
#include <linux/kref.h>
#include <net/dst_cache.h>
struct wg_device;
struct endpoint {
union {
struct sockaddr addr;
struct sockaddr_in addr4;
struct sockaddr_in6 addr6;
};
union {
struct {
struct in_addr src4;
/* Essentially the same as addr6->scope_id */
int src_if4;
};
struct in6_addr src6;
};
};
struct wg_peer {
struct wg_device *device;
struct crypt_queue tx_queue, rx_queue;
struct sk_buff_head staged_packet_queue;
int serial_work_cpu;
struct noise_keypairs keypairs;
struct endpoint endpoint;
struct dst_cache endpoint_cache;
rwlock_t endpoint_lock;
struct noise_handshake handshake;
atomic64_t last_sent_handshake;
struct work_struct transmit_handshake_work, clear_peer_work;
struct cookie latest_cookie;
struct hlist_node pubkey_hash;
u64 rx_bytes, tx_bytes;
struct timer_list timer_retransmit_handshake, timer_send_keepalive;
struct timer_list timer_new_handshake, timer_zero_key_material;
struct timer_list timer_persistent_keepalive;
unsigned int timer_handshake_attempts;
u16 persistent_keepalive_interval;
bool timer_need_another_keepalive;
bool sent_lastminute_handshake;
struct timespec64 walltime_last_handshake;
struct kref refcount;
struct rcu_head rcu;
struct list_head peer_list;
struct list_head allowedips_list;
u64 internal_id;
struct napi_struct napi;
bool is_dead;
};
struct wg_peer *wg_peer_create(struct wg_device *wg,
const u8 public_key[NOISE_PUBLIC_KEY_LEN],
const u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN]);
struct wg_peer *__must_check wg_peer_get_maybe_zero(struct wg_peer *peer);
static inline struct wg_peer *wg_peer_get(struct wg_peer *peer)
{
kref_get(&peer->refcount);
return peer;
}
void wg_peer_put(struct wg_peer *peer);
void wg_peer_remove(struct wg_peer *peer);
void wg_peer_remove_all(struct wg_device *wg);
#endif /* _WG_PEER_H */
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "peerlookup.h"
#include "peer.h"
#include "noise.h"
static struct hlist_head *pubkey_bucket(struct pubkey_hashtable *table,
const u8 pubkey[NOISE_PUBLIC_KEY_LEN])
{
/* siphash gives us a secure 64bit number based on a random key. Since
* the bits are uniformly distributed, we can then mask off to get the
* bits we need.
*/
const u64 hash = siphash(pubkey, NOISE_PUBLIC_KEY_LEN, &table->key);
return &table->hashtable[hash & (HASH_SIZE(table->hashtable) - 1)];
}
struct pubkey_hashtable *wg_pubkey_hashtable_alloc(void)
{
struct pubkey_hashtable *table = kvmalloc(sizeof(*table), GFP_KERNEL);
if (!table)
return NULL;
get_random_bytes(&table->key, sizeof(table->key));
hash_init(table->hashtable);
mutex_init(&table->lock);
return table;
}
void wg_pubkey_hashtable_add(struct pubkey_hashtable *table,
struct wg_peer *peer)
{
mutex_lock(&table->lock);
hlist_add_head_rcu(&peer->pubkey_hash,
pubkey_bucket(table, peer->handshake.remote_static));
mutex_unlock(&table->lock);
}
void wg_pubkey_hashtable_remove(struct pubkey_hashtable *table,
struct wg_peer *peer)
{
mutex_lock(&table->lock);
hlist_del_init_rcu(&peer->pubkey_hash);
mutex_unlock(&table->lock);
}
/* Returns a strong reference to a peer */
struct wg_peer *
wg_pubkey_hashtable_lookup(struct pubkey_hashtable *table,
const u8 pubkey[NOISE_PUBLIC_KEY_LEN])
{
struct wg_peer *iter_peer, *peer = NULL;
rcu_read_lock_bh();
hlist_for_each_entry_rcu_bh(iter_peer, pubkey_bucket(table, pubkey),
pubkey_hash) {
if (!memcmp(pubkey, iter_peer->handshake.remote_static,
NOISE_PUBLIC_KEY_LEN)) {
peer = iter_peer;
break;
}
}
peer = wg_peer_get_maybe_zero(peer);
rcu_read_unlock_bh();
return peer;
}
static struct hlist_head *index_bucket(struct index_hashtable *table,
const __le32 index)
{
/* Since the indices are random and thus all bits are uniformly
* distributed, we can find its bucket simply by masking.
*/
return &table->hashtable[(__force u32)index &
(HASH_SIZE(table->hashtable) - 1)];
}
struct index_hashtable *wg_index_hashtable_alloc(void)
{
struct index_hashtable *table = kvmalloc(sizeof(*table), GFP_KERNEL);
if (!table)
return NULL;
hash_init(table->hashtable);
spin_lock_init(&table->lock);
return table;
}
/* At the moment, we limit ourselves to 2^20 total peers, which generally might
* amount to 2^20*3 items in this hashtable. The algorithm below works by
* picking a random number and testing it. We can see that these limits mean we
* usually succeed pretty quickly:
*
* >>> def calculation(tries, size):
* ... return (size / 2**32)**(tries - 1) * (1 - (size / 2**32))
* ...
* >>> calculation(1, 2**20 * 3)
* 0.999267578125
* >>> calculation(2, 2**20 * 3)
* 0.0007318854331970215
* >>> calculation(3, 2**20 * 3)
* 5.360489012673497e-07
* >>> calculation(4, 2**20 * 3)
* 3.9261394135792216e-10
*
* At the moment, we don't do any masking, so this algorithm isn't exactly
* constant time in either the random guessing or in the hash list lookup. We
* could require a minimum of 3 tries, which would successfully mask the
* guessing. This would not, however, help with the growing hash lengths, which
* is another thing to consider moving forward.
*/
__le32 wg_index_hashtable_insert(struct index_hashtable *table,
struct index_hashtable_entry *entry)
{
struct index_hashtable_entry *existing_entry;
spin_lock_bh(&table->lock);
hlist_del_init_rcu(&entry->index_hash);
spin_unlock_bh(&table->lock);
rcu_read_lock_bh();
search_unused_slot:
/* First we try to find an unused slot, randomly, while unlocked. */
entry->index = (__force __le32)get_random_u32();
hlist_for_each_entry_rcu_bh(existing_entry,
index_bucket(table, entry->index),
index_hash) {
if (existing_entry->index == entry->index)
/* If it's already in use, we continue searching. */
goto search_unused_slot;
}
/* Once we've found an unused slot, we lock it, and then double-check
* that nobody else stole it from us.
*/
spin_lock_bh(&table->lock);
hlist_for_each_entry_rcu_bh(existing_entry,
index_bucket(table, entry->index),
index_hash) {
if (existing_entry->index == entry->index) {
spin_unlock_bh(&table->lock);
/* If it was stolen, we start over. */
goto search_unused_slot;
}
}
/* Otherwise, we know we have it exclusively (since we're locked),
* so we insert.
*/
hlist_add_head_rcu(&entry->index_hash,
index_bucket(table, entry->index));
spin_unlock_bh(&table->lock);
rcu_read_unlock_bh();
return entry->index;
}
bool wg_index_hashtable_replace(struct index_hashtable *table,
struct index_hashtable_entry *old,
struct index_hashtable_entry *new)
{
if (unlikely(hlist_unhashed(&old->index_hash)))
return false;
spin_lock_bh(&table->lock);
new->index = old->index;
hlist_replace_rcu(&old->index_hash, &new->index_hash);
/* Calling init here NULLs out index_hash, and in fact after this
* function returns, it's theoretically possible for this to get
* reinserted elsewhere. That means the RCU lookup below might either
* terminate early or jump between buckets, in which case the packet
* simply gets dropped, which isn't terrible.
*/
INIT_HLIST_NODE(&old->index_hash);
spin_unlock_bh(&table->lock);
return true;
}
void wg_index_hashtable_remove(struct index_hashtable *table,
struct index_hashtable_entry *entry)
{
spin_lock_bh(&table->lock);
hlist_del_init_rcu(&entry->index_hash);
spin_unlock_bh(&table->lock);
}
/* Returns a strong reference to a entry->peer */
struct index_hashtable_entry *
wg_index_hashtable_lookup(struct index_hashtable *table,
const enum index_hashtable_type type_mask,
const __le32 index, struct wg_peer **peer)
{
struct index_hashtable_entry *iter_entry, *entry = NULL;
rcu_read_lock_bh();
hlist_for_each_entry_rcu_bh(iter_entry, index_bucket(table, index),
index_hash) {
if (iter_entry->index == index) {
if (likely(iter_entry->type & type_mask))
entry = iter_entry;
break;
}
}
if (likely(entry)) {
entry->peer = wg_peer_get_maybe_zero(entry->peer);
if (likely(entry->peer))
*peer = entry->peer;
else
entry = NULL;
}
rcu_read_unlock_bh();
return entry;
}
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_PEERLOOKUP_H
#define _WG_PEERLOOKUP_H
#include "messages.h"
#include <linux/hashtable.h>
#include <linux/mutex.h>
#include <linux/siphash.h>
struct wg_peer;
struct pubkey_hashtable {
/* TODO: move to rhashtable */
DECLARE_HASHTABLE(hashtable, 11);
siphash_key_t key;
struct mutex lock;
};
struct pubkey_hashtable *wg_pubkey_hashtable_alloc(void);
void wg_pubkey_hashtable_add(struct pubkey_hashtable *table,
struct wg_peer *peer);
void wg_pubkey_hashtable_remove(struct pubkey_hashtable *table,
struct wg_peer *peer);
struct wg_peer *
wg_pubkey_hashtable_lookup(struct pubkey_hashtable *table,
const u8 pubkey[NOISE_PUBLIC_KEY_LEN]);
struct index_hashtable {
/* TODO: move to rhashtable */
DECLARE_HASHTABLE(hashtable, 13);
spinlock_t lock;
};
enum index_hashtable_type {
INDEX_HASHTABLE_HANDSHAKE = 1U << 0,
INDEX_HASHTABLE_KEYPAIR = 1U << 1
};
struct index_hashtable_entry {
struct wg_peer *peer;
struct hlist_node index_hash;
enum index_hashtable_type type;
__le32 index;
};
struct index_hashtable *wg_index_hashtable_alloc(void);
__le32 wg_index_hashtable_insert(struct index_hashtable *table,
struct index_hashtable_entry *entry);
bool wg_index_hashtable_replace(struct index_hashtable *table,
struct index_hashtable_entry *old,
struct index_hashtable_entry *new);
void wg_index_hashtable_remove(struct index_hashtable *table,
struct index_hashtable_entry *entry);
struct index_hashtable_entry *
wg_index_hashtable_lookup(struct index_hashtable *table,
const enum index_hashtable_type type_mask,
const __le32 index, struct wg_peer **peer);
#endif /* _WG_PEERLOOKUP_H */
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "queueing.h"
struct multicore_worker __percpu *
wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr)
{
int cpu;
struct multicore_worker __percpu *worker =
alloc_percpu(struct multicore_worker);
if (!worker)
return NULL;
for_each_possible_cpu(cpu) {
per_cpu_ptr(worker, cpu)->ptr = ptr;
INIT_WORK(&per_cpu_ptr(worker, cpu)->work, function);
}
return worker;
}
int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
bool multicore, unsigned int len)
{
int ret;
memset(queue, 0, sizeof(*queue));
ret = ptr_ring_init(&queue->ring, len, GFP_KERNEL);
if (ret)
return ret;
if (function) {
if (multicore) {
queue->worker = wg_packet_percpu_multicore_worker_alloc(
function, queue);
if (!queue->worker)
return -ENOMEM;
} else {
INIT_WORK(&queue->work, function);
}
}
return 0;
}
void wg_packet_queue_free(struct crypt_queue *queue, bool multicore)
{
if (multicore)
free_percpu(queue->worker);
WARN_ON(!__ptr_ring_empty(&queue->ring));
ptr_ring_cleanup(&queue->ring, NULL);
}
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_QUEUEING_H
#define _WG_QUEUEING_H
#include "peer.h"
#include <linux/types.h>
#include <linux/skbuff.h>
#include <linux/ip.h>
#include <linux/ipv6.h>
struct wg_device;
struct wg_peer;
struct multicore_worker;
struct crypt_queue;
struct sk_buff;
/* queueing.c APIs: */
int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
bool multicore, unsigned int len);
void wg_packet_queue_free(struct crypt_queue *queue, bool multicore);
struct multicore_worker __percpu *
wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr);
/* receive.c APIs: */
void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb);
void wg_packet_handshake_receive_worker(struct work_struct *work);
/* NAPI poll function: */
int wg_packet_rx_poll(struct napi_struct *napi, int budget);
/* Workqueue worker: */
void wg_packet_decrypt_worker(struct work_struct *work);
/* send.c APIs: */
void wg_packet_send_queued_handshake_initiation(struct wg_peer *peer,
bool is_retry);
void wg_packet_send_handshake_response(struct wg_peer *peer);
void wg_packet_send_handshake_cookie(struct wg_device *wg,
struct sk_buff *initiating_skb,
__le32 sender_index);
void wg_packet_send_keepalive(struct wg_peer *peer);
void wg_packet_purge_staged_packets(struct wg_peer *peer);
void wg_packet_send_staged_packets(struct wg_peer *peer);
/* Workqueue workers: */
void wg_packet_handshake_send_worker(struct work_struct *work);
void wg_packet_tx_worker(struct work_struct *work);
void wg_packet_encrypt_worker(struct work_struct *work);
enum packet_state {
PACKET_STATE_UNCRYPTED,
PACKET_STATE_CRYPTED,
PACKET_STATE_DEAD
};
struct packet_cb {
u64 nonce;
struct noise_keypair *keypair;
atomic_t state;
u32 mtu;
u8 ds;
};
#define PACKET_CB(skb) ((struct packet_cb *)((skb)->cb))
#define PACKET_PEER(skb) (PACKET_CB(skb)->keypair->entry.peer)
/* Returns either the correct skb->protocol value, or 0 if invalid. */
static inline __be16 wg_skb_examine_untrusted_ip_hdr(struct sk_buff *skb)
{
if (skb_network_header(skb) >= skb->head &&
(skb_network_header(skb) + sizeof(struct iphdr)) <=
skb_tail_pointer(skb) &&
ip_hdr(skb)->version == 4)
return htons(ETH_P_IP);
if (skb_network_header(skb) >= skb->head &&
(skb_network_header(skb) + sizeof(struct ipv6hdr)) <=
skb_tail_pointer(skb) &&
ipv6_hdr(skb)->version == 6)
return htons(ETH_P_IPV6);
return 0;
}
static inline void wg_reset_packet(struct sk_buff *skb)
{
const int pfmemalloc = skb->pfmemalloc;
skb_scrub_packet(skb, true);
memset(&skb->headers_start, 0,
offsetof(struct sk_buff, headers_end) -
offsetof(struct sk_buff, headers_start));
skb->pfmemalloc = pfmemalloc;
skb->queue_mapping = 0;
skb->nohdr = 0;
skb->peeked = 0;
skb->mac_len = 0;
skb->dev = NULL;
#ifdef CONFIG_NET_SCHED
skb->tc_index = 0;
skb_reset_tc(skb);
#endif
skb->hdr_len = skb_headroom(skb);
skb_reset_mac_header(skb);
skb_reset_network_header(skb);
skb_reset_transport_header(skb);
skb_probe_transport_header(skb);
skb_reset_inner_headers(skb);
}
static inline int wg_cpumask_choose_online(int *stored_cpu, unsigned int id)
{
unsigned int cpu = *stored_cpu, cpu_index, i;
if (unlikely(cpu == nr_cpumask_bits ||
!cpumask_test_cpu(cpu, cpu_online_mask))) {
cpu_index = id % cpumask_weight(cpu_online_mask);
cpu = cpumask_first(cpu_online_mask);
for (i = 0; i < cpu_index; ++i)
cpu = cpumask_next(cpu, cpu_online_mask);
*stored_cpu = cpu;
}
return cpu;
}
/* This function is racy, in the sense that next is unlocked, so it could return
* the same CPU twice. A race-free version of this would be to instead store an
* atomic sequence number, do an increment-and-return, and then iterate through
* every possible CPU until we get to that index -- choose_cpu. However that's
* a bit slower, and it doesn't seem like this potential race actually
* introduces any performance loss, so we live with it.
*/
static inline int wg_cpumask_next_online(int *next)
{
int cpu = *next;
while (unlikely(!cpumask_test_cpu(cpu, cpu_online_mask)))
cpu = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
*next = cpumask_next(cpu, cpu_online_mask) % nr_cpumask_bits;
return cpu;
}
static inline int wg_queue_enqueue_per_device_and_peer(
struct crypt_queue *device_queue, struct crypt_queue *peer_queue,
struct sk_buff *skb, struct workqueue_struct *wq, int *next_cpu)
{
int cpu;
atomic_set_release(&PACKET_CB(skb)->state, PACKET_STATE_UNCRYPTED);
/* We first queue this up for the peer ingestion, but the consumer
* will wait for the state to change to CRYPTED or DEAD before
* ingesting it.
*/
if (unlikely(ptr_ring_produce_bh(&peer_queue->ring, skb)))
return -ENOSPC;
/* Then we queue it up in the device queue, which consumes the
* packet as soon as it can.
*/
cpu = wg_cpumask_next_online(next_cpu);
if (unlikely(ptr_ring_produce_bh(&device_queue->ring, skb)))
return -EPIPE;
queue_work_on(cpu, wq, &per_cpu_ptr(device_queue->worker, cpu)->work);
return 0;
}
static inline void wg_queue_enqueue_per_peer(struct crypt_queue *queue,
struct sk_buff *skb,
enum packet_state state)
{
/* We take a reference, because as soon as we call atomic_set, the
* peer can be freed from below us.
*/
struct wg_peer *peer = wg_peer_get(PACKET_PEER(skb));
atomic_set_release(&PACKET_CB(skb)->state, state);
queue_work_on(wg_cpumask_choose_online(&peer->serial_work_cpu,
peer->internal_id),
peer->device->packet_crypt_wq, &queue->work);
wg_peer_put(peer);
}
static inline void wg_queue_enqueue_per_peer_napi(struct sk_buff *skb,
enum packet_state state)
{
/* We take a reference, because as soon as we call atomic_set, the
* peer can be freed from below us.
*/
struct wg_peer *peer = wg_peer_get(PACKET_PEER(skb));
atomic_set_release(&PACKET_CB(skb)->state, state);
napi_schedule(&peer->napi);
wg_peer_put(peer);
}
#ifdef DEBUG
bool wg_packet_counter_selftest(void);
#endif
#endif /* _WG_QUEUEING_H */
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#include "ratelimiter.h"
#include <linux/siphash.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <net/ip.h>
static struct kmem_cache *entry_cache;
static hsiphash_key_t key;
static spinlock_t table_lock = __SPIN_LOCK_UNLOCKED("ratelimiter_table_lock");
static DEFINE_MUTEX(init_lock);
static u64 init_refcnt; /* Protected by init_lock, hence not atomic. */
static atomic_t total_entries = ATOMIC_INIT(0);
static unsigned int max_entries, table_size;
static void wg_ratelimiter_gc_entries(struct work_struct *);
static DECLARE_DEFERRABLE_WORK(gc_work, wg_ratelimiter_gc_entries);
static struct hlist_head *table_v4;
#if IS_ENABLED(CONFIG_IPV6)
static struct hlist_head *table_v6;
#endif
struct ratelimiter_entry {
u64 last_time_ns, tokens, ip;
void *net;
spinlock_t lock;
struct hlist_node hash;
struct rcu_head rcu;
};
enum {
PACKETS_PER_SECOND = 20,
PACKETS_BURSTABLE = 5,
PACKET_COST = NSEC_PER_SEC / PACKETS_PER_SECOND,
TOKEN_MAX = PACKET_COST * PACKETS_BURSTABLE
};
static void entry_free(struct rcu_head *rcu)
{
kmem_cache_free(entry_cache,
container_of(rcu, struct ratelimiter_entry, rcu));
atomic_dec(&total_entries);
}
static void entry_uninit(struct ratelimiter_entry *entry)
{
hlist_del_rcu(&entry->hash);
call_rcu(&entry->rcu, entry_free);
}
/* Calling this function with a NULL work uninits all entries. */
static void wg_ratelimiter_gc_entries(struct work_struct *work)
{
const u64 now = ktime_get_coarse_boottime_ns();
struct ratelimiter_entry *entry;
struct hlist_node *temp;
unsigned int i;
for (i = 0; i < table_size; ++i) {
spin_lock(&table_lock);
hlist_for_each_entry_safe(entry, temp, &table_v4[i], hash) {
if (unlikely(!work) ||
now - entry->last_time_ns > NSEC_PER_SEC)
entry_uninit(entry);
}
#if IS_ENABLED(CONFIG_IPV6)
hlist_for_each_entry_safe(entry, temp, &table_v6[i], hash) {
if (unlikely(!work) ||
now - entry->last_time_ns > NSEC_PER_SEC)
entry_uninit(entry);
}
#endif
spin_unlock(&table_lock);
if (likely(work))
cond_resched();
}
if (likely(work))
queue_delayed_work(system_power_efficient_wq, &gc_work, HZ);
}
bool wg_ratelimiter_allow(struct sk_buff *skb, struct net *net)
{
/* We only take the bottom half of the net pointer, so that we can hash
* 3 words in the end. This way, siphash's len param fits into the final
* u32, and we don't incur an extra round.
*/
const u32 net_word = (unsigned long)net;
struct ratelimiter_entry *entry;
struct hlist_head *bucket;
u64 ip;
if (skb->protocol == htons(ETH_P_IP)) {
ip = (u64 __force)ip_hdr(skb)->saddr;
bucket = &table_v4[hsiphash_2u32(net_word, ip, &key) &
(table_size - 1)];
}
#if IS_ENABLED(CONFIG_IPV6)
else if (skb->protocol == htons(ETH_P_IPV6)) {
/* Only use 64 bits, so as to ratelimit the whole /64. */
memcpy(&ip, &ipv6_hdr(skb)->saddr, sizeof(ip));
bucket = &table_v6[hsiphash_3u32(net_word, ip >> 32, ip, &key) &
(table_size - 1)];
}
#endif
else
return false;
rcu_read_lock();
hlist_for_each_entry_rcu(entry, bucket, hash) {
if (entry->net == net && entry->ip == ip) {
u64 now, tokens;
bool ret;
/* Quasi-inspired by nft_limit.c, but this is actually a
* slightly different algorithm. Namely, we incorporate
* the burst as part of the maximum tokens, rather than
* as part of the rate.
*/
spin_lock(&entry->lock);
now = ktime_get_coarse_boottime_ns();
tokens = min_t(u64, TOKEN_MAX,
entry->tokens + now -
entry->last_time_ns);
entry->last_time_ns = now;
ret = tokens >= PACKET_COST;
entry->tokens = ret ? tokens - PACKET_COST : tokens;
spin_unlock(&entry->lock);
rcu_read_unlock();
return ret;
}
}
rcu_read_unlock();
if (atomic_inc_return(&total_entries) > max_entries)
goto err_oom;
entry = kmem_cache_alloc(entry_cache, GFP_KERNEL);
if (unlikely(!entry))
goto err_oom;
entry->net = net;
entry->ip = ip;
INIT_HLIST_NODE(&entry->hash);
spin_lock_init(&entry->lock);
entry->last_time_ns = ktime_get_coarse_boottime_ns();
entry->tokens = TOKEN_MAX - PACKET_COST;
spin_lock(&table_lock);
hlist_add_head_rcu(&entry->hash, bucket);
spin_unlock(&table_lock);
return true;
err_oom:
atomic_dec(&total_entries);
return false;
}
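The rate limit applied above is a token bucket measured in nanoseconds: each packet costs PACKET_COST, and an idle source accrues at most TOKEN_MAX of credit, i.e. a burst of PACKETS_BURSTABLE packets. A standalone userspace sketch of the same arithmetic (illustration only, not kernel code; names are hypothetical):

#include <stdbool.h>
#include <stdint.h>

#define NSEC_PER_SEC	1000000000ULL
#define PACKET_COST	(NSEC_PER_SEC / 20)	/* PACKETS_PER_SECOND = 20 */
#define TOKEN_MAX	(PACKET_COST * 5)	/* PACKETS_BURSTABLE = 5 */

struct bucket { uint64_t tokens, last_ns; };

static bool bucket_allow(struct bucket *b, uint64_t now_ns)
{
	uint64_t tokens = b->tokens + (now_ns - b->last_ns);

	if (tokens > TOKEN_MAX)
		tokens = TOKEN_MAX;	/* the burst is folded into the cap */
	b->last_ns = now_ns;
	b->tokens = tokens >= PACKET_COST ? tokens - PACKET_COST : tokens;
	return tokens >= PACKET_COST;
}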
int wg_ratelimiter_init(void)
{
mutex_lock(&init_lock);
if (++init_refcnt != 1)
goto out;
entry_cache = KMEM_CACHE(ratelimiter_entry, 0);
if (!entry_cache)
goto err;
/* xt_hashlimit.c uses a slightly different algorithm for ratelimiting,
* but what it shares in common is that it uses a massive hashtable. So,
* we borrow their wisdom about good table sizes on different systems
* dependent on RAM. This calculation here comes from there.
*/
table_size = (totalram_pages() > (1U << 30) / PAGE_SIZE) ? 8192 :
max_t(unsigned long, 16, roundup_pow_of_two(
(totalram_pages() << PAGE_SHIFT) /
(1U << 14) / sizeof(struct hlist_head)));
max_entries = table_size * 8;
table_v4 = kvzalloc(table_size * sizeof(*table_v4), GFP_KERNEL);
if (unlikely(!table_v4))
goto err_kmemcache;
#if IS_ENABLED(CONFIG_IPV6)
table_v6 = kvzalloc(table_size * sizeof(*table_v6), GFP_KERNEL);
if (unlikely(!table_v6)) {
kvfree(table_v4);
goto err_kmemcache;
}
#endif
queue_delayed_work(system_power_efficient_wq, &gc_work, HZ);
get_random_bytes(&key, sizeof(key));
out:
mutex_unlock(&init_lock);
return 0;
err_kmemcache:
kmem_cache_destroy(entry_cache);
err:
--init_refcnt;
mutex_unlock(&init_lock);
return -ENOMEM;
}
void wg_ratelimiter_uninit(void)
{
mutex_lock(&init_lock);
if (!init_refcnt || --init_refcnt)
goto out;
cancel_delayed_work_sync(&gc_work);
wg_ratelimiter_gc_entries(NULL);
rcu_barrier();
kvfree(table_v4);
#if IS_ENABLED(CONFIG_IPV6)
kvfree(table_v6);
#endif
kmem_cache_destroy(entry_cache);
out:
mutex_unlock(&init_lock);
}
#include "selftest/ratelimiter.c"
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_RATELIMITER_H
#define _WG_RATELIMITER_H
#include <linux/skbuff.h>
int wg_ratelimiter_init(void);
void wg_ratelimiter_uninit(void);
bool wg_ratelimiter_allow(struct sk_buff *skb, struct net *net);
#ifdef DEBUG
bool wg_ratelimiter_selftest(void);
#endif
#endif /* _WG_RATELIMITER_H */
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifdef DEBUG
bool __init wg_packet_counter_selftest(void)
{
unsigned int test_num = 0, i;
union noise_counter counter;
bool success = true;
#define T_INIT do { \
memset(&counter, 0, sizeof(union noise_counter)); \
spin_lock_init(&counter.receive.lock); \
} while (0)
#define T_LIM (COUNTER_WINDOW_SIZE + 1)
#define T(n, v) do { \
++test_num; \
if (counter_validate(&counter, n) != (v)) { \
pr_err("nonce counter self-test %u: FAIL\n", \
test_num); \
success = false; \
} \
} while (0)
T_INIT;
/* 1 */ T(0, true);
/* 2 */ T(1, true);
/* 3 */ T(1, false);
/* 4 */ T(9, true);
/* 5 */ T(8, true);
/* 6 */ T(7, true);
/* 7 */ T(7, false);
/* 8 */ T(T_LIM, true);
/* 9 */ T(T_LIM - 1, true);
/* 10 */ T(T_LIM - 1, false);
/* 11 */ T(T_LIM - 2, true);
/* 12 */ T(2, true);
/* 13 */ T(2, false);
/* 14 */ T(T_LIM + 16, true);
/* 15 */ T(3, false);
/* 16 */ T(T_LIM + 16, false);
/* 17 */ T(T_LIM * 4, true);
/* 18 */ T(T_LIM * 4 - (T_LIM - 1), true);
/* 19 */ T(10, false);
/* 20 */ T(T_LIM * 4 - T_LIM, false);
/* 21 */ T(T_LIM * 4 - (T_LIM + 1), false);
/* 22 */ T(T_LIM * 4 - (T_LIM - 2), true);
/* 23 */ T(T_LIM * 4 + 1 - T_LIM, false);
/* 24 */ T(0, false);
/* 25 */ T(REJECT_AFTER_MESSAGES, false);
/* 26 */ T(REJECT_AFTER_MESSAGES - 1, true);
/* 27 */ T(REJECT_AFTER_MESSAGES, false);
/* 28 */ T(REJECT_AFTER_MESSAGES - 1, false);
/* 29 */ T(REJECT_AFTER_MESSAGES - 2, true);
/* 30 */ T(REJECT_AFTER_MESSAGES + 1, false);
/* 31 */ T(REJECT_AFTER_MESSAGES + 2, false);
/* 32 */ T(REJECT_AFTER_MESSAGES - 2, false);
/* 33 */ T(REJECT_AFTER_MESSAGES - 3, true);
/* 34 */ T(0, false);
T_INIT;
for (i = 1; i <= COUNTER_WINDOW_SIZE; ++i)
T(i, true);
T(0, true);
T(0, false);
T_INIT;
for (i = 2; i <= COUNTER_WINDOW_SIZE + 1; ++i)
T(i, true);
T(1, true);
T(0, false);
T_INIT;
for (i = COUNTER_WINDOW_SIZE + 1; i-- > 0;)
T(i, true);
T_INIT;
for (i = COUNTER_WINDOW_SIZE + 2; i-- > 1;)
T(i, true);
T(0, false);
T_INIT;
for (i = COUNTER_WINDOW_SIZE + 1; i-- > 1;)
T(i, true);
T(COUNTER_WINDOW_SIZE + 1, true);
T(0, false);
T_INIT;
for (i = COUNTER_WINDOW_SIZE + 1; i-- > 1;)
T(i, true);
T(0, true);
T(COUNTER_WINDOW_SIZE + 1, true);
#undef T
#undef T_LIM
#undef T_INIT
if (success)
pr_info("nonce counter self-tests: pass\n");
return success;
}
#endif
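The counter_validate() function exercised by this selftest is implemented in noise.c, whose diff is not shown in this view. A userspace sketch of the RFC 6479-style sliding window it performs, using the window constants from messages.h, might look like the following (an approximation for illustration, not the kernel code):

#include <stdbool.h>
#include <stdint.h>

#define BITS_PER_LONG		(8 * sizeof(unsigned long))
#define COUNTER_BITS_TOTAL	2048
#define COUNTER_WINDOW_SIZE	(COUNTER_BITS_TOTAL - BITS_PER_LONG)
#define REJECT_AFTER_MESSAGES	(UINT64_MAX - COUNTER_WINDOW_SIZE - 1)
#define WORDS			(COUNTER_BITS_TOTAL / BITS_PER_LONG)

struct window {
	uint64_t counter;		/* highest validated counter, plus one */
	unsigned long backtrack[WORDS];	/* bitmap of recently seen counters */
};

static bool window_validate(struct window *w, uint64_t their_counter)
{
	uint64_t i, index, cur_index, top;

	if (w->counter >= REJECT_AFTER_MESSAGES + 1 ||
	    their_counter >= REJECT_AFTER_MESSAGES)
		return false;
	++their_counter;

	/* Older than the entire window: reject as a replay. */
	if (their_counter + COUNTER_WINDOW_SIZE < w->counter)
		return false;

	index = their_counter / BITS_PER_LONG;
	if (their_counter > w->counter) {
		/* Slide the window forward, clearing words we skip over. */
		cur_index = w->counter / BITS_PER_LONG;
		top = index - cur_index;
		if (top > WORDS)
			top = WORDS;
		for (i = 1; i <= top; ++i)
			w->backtrack[(cur_index + i) % WORDS] = 0;
		w->counter = their_counter;
	}

	index %= WORDS;
	if (w->backtrack[index] & (1UL << (their_counter % BITS_PER_LONG)))
		return false;	/* bit already set: duplicate */
	w->backtrack[index] |= 1UL << (their_counter % BITS_PER_LONG);
	return true;
}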
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
*/
#ifndef _WG_TIMERS_H
#define _WG_TIMERS_H
#include <linux/ktime.h>
struct wg_peer;
void wg_timers_init(struct wg_peer *peer);
void wg_timers_stop(struct wg_peer *peer);
void wg_timers_data_sent(struct wg_peer *peer);
void wg_timers_data_received(struct wg_peer *peer);
void wg_timers_any_authenticated_packet_sent(struct wg_peer *peer);
void wg_timers_any_authenticated_packet_received(struct wg_peer *peer);
void wg_timers_handshake_initiated(struct wg_peer *peer);
void wg_timers_handshake_complete(struct wg_peer *peer);
void wg_timers_session_derived(struct wg_peer *peer);
void wg_timers_any_authenticated_packet_traversal(struct wg_peer *peer);
static inline bool wg_birthdate_has_expired(u64 birthday_nanoseconds,
u64 expiration_seconds)
{
return (s64)(birthday_nanoseconds + expiration_seconds * NSEC_PER_SEC)
<= (s64)ktime_get_coarse_boottime_ns();
}
#endif /* _WG_TIMERS_H */
#define WIREGUARD_VERSION "1.0.0"