Commit dd8b3a80 authored by Jakub Kicinski

Merge tag 'ipsec-next-2022-12-09' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next

Steffen Klassert says:

====================
ipsec-next 2022-12-09

1) Add xfrm packet offload core API.
   From Leon Romanovsky.

2) Add xfrm packet offload support for mlx5.
   From Leon Romanovsky and Raed Salem.

3) Fix a typo in an error message.
   From Colin Ian King.

* tag 'ipsec-next-2022-12-09' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next: (38 commits)
  xfrm: Fix spelling mistake "oflload" -> "offload"
  net/mlx5e: Open mlx5 driver to accept IPsec packet offload
  net/mlx5e: Handle ESN update events
  net/mlx5e: Handle hardware IPsec limits events
  net/mlx5e: Update IPsec soft and hard limits
  net/mlx5e: Store all XFRM SAs in Xarray
  net/mlx5e: Provide intermediate pointer to access IPsec struct
  net/mlx5e: Skip IPsec encryption for TX path without matching policy
  net/mlx5e: Add statistics for Rx/Tx IPsec offloaded flows
  net/mlx5e: Improve IPsec flow steering autogroup
  net/mlx5e: Configure IPsec packet offload flow steering
  net/mlx5e: Use same coding pattern for Rx and Tx flows
  net/mlx5e: Add XFRM policy offload logic
  net/mlx5e: Create IPsec policy offload tables
  net/mlx5e: Generalize creation of default IPsec miss group and rule
  net/mlx5e: Group IPsec miss handles into separate struct
  net/mlx5e: Make clear what IPsec rx_err does
  net/mlx5e: Flatten the IPsec RX add rule path
  net/mlx5e: Refactor FTE setup code to be more clear
  net/mlx5e: Move IPsec flow table creation to separate function
  ...
====================

Link: https://lore.kernel.org/r/20221209093310.4018731-1-steffen.klassert@secunet.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
parents 5fc11a40 abe2343d
......@@ -5,6 +5,7 @@ XFRM device - offloading the IPsec computations
===============================================
Shannon Nelson <shannon.nelson@oracle.com>
Leon Romanovsky <leonro@nvidia.com>
Overview
......@@ -18,10 +19,21 @@ can radically increase throughput and decrease CPU utilization. The XFRM
Device interface allows NIC drivers to offer to the stack access to the
hardware offload.
Right now, there are two types of hardware offload that the kernel supports.

 * IPsec crypto offload:
   * NIC performs encrypt/decrypt
   * Kernel does everything else
 * IPsec packet offload:
   * NIC performs encrypt/decrypt
   * NIC does encapsulation
   * Kernel and NIC have SA and policy in-sync
   * NIC handles the SA and policy states
   * The Kernel talks to the keymanager
Userland access to the offload is typically through a system such as
libreswan or KAME/raccoon, but the iproute2 'ip xfrm' command set can
be handy when experimenting. An example command might look something
like this::
like this for crypto offload::
ip x s add proto esp dst 14.0.0.70 src 14.0.0.52 spi 0x07 mode transport \
reqid 0x07 replay-window 32 \
......@@ -29,6 +41,17 @@ like this::
sel src 14.0.0.52/24 dst 14.0.0.70/24 proto tcp \
offload dev eth4 dir in
and for packet offload::
ip x s add proto esp dst 14.0.0.70 src 14.0.0.52 spi 0x07 mode transport \
reqid 0x07 replay-window 32 \
aead 'rfc4106(gcm(aes))' 0x44434241343332312423222114131211f4f3f2f1 128 \
sel src 14.0.0.52/24 dst 14.0.0.70/24 proto tcp \
offload packet dev eth4 dir in
ip x p add src 14.0.0.70 dst 14.0.0.52 offload packet dev eth4 dir in
tmpl src 14.0.0.70 dst 14.0.0.52 proto esp reqid 10000 mode transport
Yes, that's ugly, but that's what shell scripts and/or libreswan are for.
......@@ -40,17 +63,24 @@ Callbacks to implement
/* from include/linux/netdevice.h */
struct xfrmdev_ops {
/* Crypto and Packet offload callbacks */
int (*xdo_dev_state_add) (struct xfrm_state *x);
void (*xdo_dev_state_delete) (struct xfrm_state *x);
void (*xdo_dev_state_free) (struct xfrm_state *x);
bool (*xdo_dev_offload_ok) (struct sk_buff *skb,
struct xfrm_state *x);
void (*xdo_dev_state_advance_esn) (struct xfrm_state *x);
/* Solely packet offload callbacks */
void (*xdo_dev_state_update_curlft) (struct xfrm_state *x);
int (*xdo_dev_policy_add) (struct xfrm_policy *x);
void (*xdo_dev_policy_delete) (struct xfrm_policy *x);
void (*xdo_dev_policy_free) (struct xfrm_policy *x);
};
The NIC driver offering ipsec offload will need to implement these
callbacks to make the offload available to the network stack's
XFRM subsystem. Additionally, the feature bits NETIF_F_HW_ESP and
The NIC driver offering ipsec offload will need to implement the callbacks
relevant to the supported offload type to make the offload available to the
network stack's XFRM subsystem. Additionally, the feature bits NETIF_F_HW_ESP and
NETIF_F_HW_ESP_TX_CSUM will signal the availability of the offload.
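For orientation, a minimal sketch of how a driver typically wires this up (the
foo_* names are hypothetical placeholders, not part of this patch): crypto-only
drivers fill the first five callbacks, packet offload drivers also provide the
policy and curlft callbacks, and the feature bits are set when the netdev is
built::

static const struct xfrmdev_ops foo_xfrmdev_ops = {
	/* crypto and packet offload callbacks */
	.xdo_dev_state_add	= foo_xfrm_add_state,
	.xdo_dev_state_delete	= foo_xfrm_del_state,
	.xdo_dev_state_free	= foo_xfrm_free_state,
	.xdo_dev_offload_ok	= foo_ipsec_offload_ok,
	.xdo_dev_state_advance_esn = foo_xfrm_advance_esn,
	/* packet offload only */
	.xdo_dev_state_update_curlft = foo_xfrm_update_curlft,
	.xdo_dev_policy_add	= foo_xfrm_add_policy,
	.xdo_dev_policy_delete	= foo_xfrm_del_policy,
	.xdo_dev_policy_free	= foo_xfrm_free_policy,
};

static void foo_ipsec_build_netdev(struct net_device *netdev)
{
	netdev->xfrmdev_ops = &foo_xfrmdev_ops;
	netdev->features |= NETIF_F_HW_ESP | NETIF_F_HW_ESP_TX_CSUM;
	netdev->hw_enc_features |= NETIF_F_HW_ESP;
}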
......@@ -79,7 +109,8 @@ and an indication of whether it is for Rx or Tx. The driver should
=========== ===================================
0 success
-EOPNOTSUPP offload not supported, try SW IPsec
-EOPNOTSUPP offload not supported, try SW IPsec,
            not applicable for packet offload mode
other fail the request
=========== ===================================
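A hedged sketch of how xdo_dev_state_add() might honour this convention
(foo_hw_supports_sa() and foo_hw_program_sa() are assumed helpers, not real
API): in crypto mode an unsupported SA returns -EOPNOTSUPP so the stack can
fall back to SW IPsec, while in packet offload mode there is no fallback and a
hard error is returned instead::

static int foo_xfrm_add_state(struct xfrm_state *x)
{
	if (!foo_hw_supports_sa(x)) {
		/* No SW fallback exists in packet offload mode, so do not
		 * return -EOPNOTSUPP there; fail the request instead.
		 */
		if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET)
			return -EINVAL;
		return -EOPNOTSUPP;	/* try SW IPsec */
	}

	return foo_hw_program_sa(x);	/* 0 on success */
}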
......@@ -96,6 +127,7 @@ will be serviceable. This can check the packet information to be sure the
offload can be supported (e.g. IPv4 or IPv6, no IPv4 options, etc) and
return true or false to signify its support.
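As an illustration only (a sketch under the assumptions noted in the comments,
not taken from any specific driver), a crypto-offload xdo_dev_offload_ok()
often reduces to a few header sanity checks::

static bool foo_ipsec_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
{
	if (x->props.family == AF_INET) {
		/* offload with IPv4 options is not supported */
		if (ip_hdr(skb)->ihl != 5)
			return false;
	} else {
		/* offload with IPv6 extension headers is not supported */
		if (ipv6_ext_hdr(ipv6_hdr(skb)->nexthdr))
			return false;
	}

	return true;
}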
Crypto offload mode:
When ready to send, the driver needs to inspect the Tx packet for the
offload information, including the opaque context, and set up the packet
send accordingly::
......@@ -139,13 +171,25 @@ the stack in xfrm_input().
In ESN mode, xdo_dev_state_advance_esn() is called from xfrm_replay_advance_esn().
Driver will check packet seq number and update HW ESN state machine if needed.
Packet offload mode:
HW adds and deletes XFRM headers. So in the RX path, the XFRM stack is
bypassed if the HW reported success. In the TX path, the packet leaves the
kernel without the extra header and unencrypted; the HW is responsible for
adding the header and performing the encryption.
When the SA is removed by the user, the driver's xdo_dev_state_delete()
is asked to disable the offload. Later, xdo_dev_state_free() is called
from a garbage collection routine after all reference counts to the state
and xdo_dev_policy_delete() are asked to disable the offload. Later,
xdo_dev_state_free() and xdo_dev_policy_free() are called from a garbage
collection routine after all reference counts to the state and policy
have been removed and any remaining resources can be cleared for the
offload state. How these are used by the driver will depend on specific
hardware needs.
As a netdev is set to DOWN the XFRM stack's netdev listener will call
xdo_dev_state_delete() and xdo_dev_state_free() on any remaining offloaded
states.
xdo_dev_state_delete(), xdo_dev_policy_delete(), xdo_dev_state_free() and
xdo_dev_policy_free() on any remaining offloaded states.
As a consequence of the HW handling the packets, the XFRM core can't count
the hard and soft limits. The HW/driver are responsible for tracking them and
for providing accurate data when xdo_dev_state_update_curlft() is called. If
one of these limits is reached, the driver needs to call
xfrm_state_check_expire() to make sure that XFRM performs the rekeying
sequence.
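A possible shape for this, with foo_hw_read_sa_counters() standing in for
whatever counter readout the hardware provides (an assumption, not a real
helper)::

static void foo_xfrm_update_curlft(struct xfrm_state *x)
{
	u64 packets, bytes;

	foo_hw_read_sa_counters(x, &packets, &bytes);	/* assumed helper */
	x->curlft.packets = packets;
	x->curlft.bytes = bytes;
}

/* ...and from the driver's soft/hard limit event handler: */
static void foo_handle_limit_event(struct xfrm_state *x)
{
	spin_lock_bh(&x->lock);
	xfrm_state_check_expire(x);	/* lets XFRM run the expire/rekey path */
	spin_unlock_bh(&x->lock);
}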
......@@ -283,6 +283,10 @@ static int ch_ipsec_xfrm_add_state(struct xfrm_state *x)
pr_debug("Cannot offload xfrm states with geniv other than seqiv\n");
return -EINVAL;
}
if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
pr_debug("Unsupported xfrm offload\n");
return -EINVAL;
}
sa_entry = kzalloc(sizeof(*sa_entry), GFP_KERNEL);
if (!sa_entry) {
......
......@@ -585,6 +585,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
return -EINVAL;
}
if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
netdev_err(dev, "Unsupported ipsec offload type\n");
return -EINVAL;
}
if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
struct rx_sa rsa;
......
......@@ -280,6 +280,11 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
return -EINVAL;
}
if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
netdev_err(dev, "Unsupported ipsec offload type\n");
return -EINVAL;
}
if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
struct rx_sa rsa;
......
......@@ -1245,4 +1245,5 @@ int mlx5e_set_vf_rate(struct net_device *dev, int vf, int min_tx_rate, int max_t
int mlx5e_get_vf_config(struct net_device *dev, int vf, struct ifla_vf_info *ivi);
int mlx5e_get_vf_stats(struct net_device *dev, int vf, struct ifla_vf_stats *vf_stats);
#endif
int mlx5e_create_mkey(struct mlx5_core_dev *mdev, u32 pdn, u32 *mkey);
#endif /* __MLX5_EN_H__ */
......@@ -84,7 +84,8 @@ enum {
MLX5E_ARFS_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
#endif
#ifdef CONFIG_MLX5_EN_IPSEC
MLX5E_ACCEL_FS_ESP_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
MLX5E_ACCEL_FS_POL_FT_LEVEL = MLX5E_INNER_TTC_FT_LEVEL + 1,
MLX5E_ACCEL_FS_ESP_FT_LEVEL,
MLX5E_ACCEL_FS_ESP_FT_ERR_LEVEL,
#endif
};
......
......@@ -162,7 +162,6 @@ mlx5e_tc_meter_modify(struct mlx5_core_dev *mdev,
MLX5_ACCESS_ASO_OPC_MOD_FLOW_METER);
aso_ctrl = &aso_wqe->aso_ctrl;
memset(aso_ctrl, 0, sizeof(*aso_ctrl));
aso_ctrl->data_mask_mode = MLX5_ASO_DATA_MASK_MODE_BYTEWISE_64BYTE << 6;
aso_ctrl->condition_1_0_operand = MLX5_ASO_ALWAYS_TRUE |
MLX5_ASO_ALWAYS_TRUE << 4;
......
......@@ -34,27 +34,14 @@
#ifndef __MLX5E_IPSEC_H__
#define __MLX5E_IPSEC_H__
#ifdef CONFIG_MLX5_EN_IPSEC
#include <linux/mlx5/device.h>
#include <net/xfrm.h>
#include <linux/idr.h>
#include "lib/aso.h"
#define MLX5E_IPSEC_SADB_RX_BITS 10
#define MLX5E_IPSEC_ESN_SCOPE_MID 0x80000000L
enum mlx5_accel_esp_flags {
MLX5_ACCEL_ESP_FLAGS_TUNNEL = 0, /* Default */
MLX5_ACCEL_ESP_FLAGS_TRANSPORT = 1UL << 0,
MLX5_ACCEL_ESP_FLAGS_ESN_TRIGGERED = 1UL << 1,
MLX5_ACCEL_ESP_FLAGS_ESN_STATE_OVERLAP = 1UL << 2,
};
enum mlx5_accel_esp_action {
MLX5_ACCEL_ESP_ACTION_DECRYPT,
MLX5_ACCEL_ESP_ACTION_ENCRYPT,
};
struct aes_gcm_keymat {
u64 seq_iv;
......@@ -66,7 +53,6 @@ struct aes_gcm_keymat {
};
struct mlx5_accel_esp_xfrm_attrs {
enum mlx5_accel_esp_action action;
u32 esn;
u32 spi;
u32 flags;
......@@ -82,16 +68,37 @@ struct mlx5_accel_esp_xfrm_attrs {
__be32 a6[4];
} daddr;
u8 is_ipv6;
u8 dir : 2;
u8 esn_overlap : 1;
u8 esn_trigger : 1;
u8 type : 2;
u8 family;
u32 replay_window;
u32 authsize;
u32 reqid;
u64 hard_packet_limit;
u64 soft_packet_limit;
};
enum mlx5_ipsec_cap {
MLX5_IPSEC_CAP_CRYPTO = 1 << 0,
MLX5_IPSEC_CAP_ESN = 1 << 1,
MLX5_IPSEC_CAP_PACKET_OFFLOAD = 1 << 2,
};
struct mlx5e_priv;
struct mlx5e_ipsec_hw_stats {
u64 ipsec_rx_pkts;
u64 ipsec_rx_bytes;
u64 ipsec_rx_drop_pkts;
u64 ipsec_rx_drop_bytes;
u64 ipsec_tx_pkts;
u64 ipsec_tx_bytes;
u64 ipsec_tx_drop_pkts;
u64 ipsec_tx_drop_bytes;
};
struct mlx5e_ipsec_sw_stats {
atomic64_t ipsec_rx_drop_sp_alloc;
atomic64_t ipsec_rx_drop_sadb_miss;
......@@ -102,17 +109,38 @@ struct mlx5e_ipsec_sw_stats {
atomic64_t ipsec_tx_drop_trailer;
};
struct mlx5e_accel_fs_esp;
struct mlx5e_ipsec_rx;
struct mlx5e_ipsec_tx;
struct mlx5e_ipsec_work {
struct work_struct work;
struct mlx5e_ipsec *ipsec;
u32 id;
};
struct mlx5e_ipsec_aso {
u8 ctx[MLX5_ST_SZ_BYTES(ipsec_aso)];
dma_addr_t dma_addr;
struct mlx5_aso *aso;
/* IPsec ASO caches data on every query call,
* so in nested calls, we can use this boolean to save
* recursive calls to mlx5e_ipsec_aso_query()
*/
u8 use_cache : 1;
};
struct mlx5e_ipsec {
struct mlx5_core_dev *mdev;
DECLARE_HASHTABLE(sadb_rx, MLX5E_IPSEC_SADB_RX_BITS);
spinlock_t sadb_rx_lock; /* Protects sadb_rx */
struct xarray sadb;
struct mlx5e_ipsec_sw_stats sw_stats;
struct mlx5e_ipsec_hw_stats hw_stats;
struct workqueue_struct *wq;
struct mlx5e_accel_fs_esp *rx_fs;
struct mlx5e_ipsec_tx *tx_fs;
struct mlx5e_flow_steering *fs;
struct mlx5e_ipsec_rx *rx_ipv4;
struct mlx5e_ipsec_rx *rx_ipv6;
struct mlx5e_ipsec_tx *tx;
struct mlx5e_ipsec_aso *aso;
struct notifier_block nb;
};
struct mlx5e_ipsec_esn_state {
......@@ -123,7 +151,8 @@ struct mlx5e_ipsec_esn_state {
struct mlx5e_ipsec_rule {
struct mlx5_flow_handle *rule;
struct mlx5_modify_hdr *set_modify_hdr;
struct mlx5_modify_hdr *modify_hdr;
struct mlx5_pkt_reformat *pkt_reformat;
};
struct mlx5e_ipsec_modify_state_work {
......@@ -132,9 +161,7 @@ struct mlx5e_ipsec_modify_state_work {
};
struct mlx5e_ipsec_sa_entry {
struct hlist_node hlist; /* Item in SADB_RX hashtable */
struct mlx5e_ipsec_esn_state esn_state;
unsigned int handle; /* Handle in SADB_RX */
struct xfrm_state *x;
struct mlx5e_ipsec *ipsec;
struct mlx5_accel_esp_xfrm_attrs attrs;
......@@ -146,19 +173,43 @@ struct mlx5e_ipsec_sa_entry {
struct mlx5e_ipsec_modify_state_work modify_work;
};
struct mlx5_accel_pol_xfrm_attrs {
union {
__be32 a4;
__be32 a6[4];
} saddr;
union {
__be32 a4;
__be32 a6[4];
} daddr;
u8 family;
u8 action;
u8 type : 2;
u8 dir : 2;
u32 reqid;
};
struct mlx5e_ipsec_pol_entry {
struct xfrm_policy *x;
struct mlx5e_ipsec *ipsec;
struct mlx5e_ipsec_rule ipsec_rule;
struct mlx5_accel_pol_xfrm_attrs attrs;
};
#ifdef CONFIG_MLX5_EN_IPSEC
void mlx5e_ipsec_init(struct mlx5e_priv *priv);
void mlx5e_ipsec_cleanup(struct mlx5e_priv *priv);
void mlx5e_ipsec_build_netdev(struct mlx5e_priv *priv);
struct xfrm_state *mlx5e_ipsec_sadb_rx_lookup(struct mlx5e_ipsec *dev,
unsigned int handle);
void mlx5e_accel_ipsec_fs_cleanup(struct mlx5e_ipsec *ipsec);
int mlx5e_accel_ipsec_fs_init(struct mlx5e_ipsec *ipsec);
int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_priv *priv,
struct mlx5e_ipsec_sa_entry *sa_entry);
void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_priv *priv,
struct mlx5e_ipsec_sa_entry *sa_entry);
int mlx5e_accel_ipsec_fs_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry);
void mlx5e_accel_ipsec_fs_del_rule(struct mlx5e_ipsec_sa_entry *sa_entry);
int mlx5e_accel_ipsec_fs_add_pol(struct mlx5e_ipsec_pol_entry *pol_entry);
void mlx5e_accel_ipsec_fs_del_pol(struct mlx5e_ipsec_pol_entry *pol_entry);
int mlx5_ipsec_create_sa_ctx(struct mlx5e_ipsec_sa_entry *sa_entry);
void mlx5_ipsec_free_sa_ctx(struct mlx5e_ipsec_sa_entry *sa_entry);
......@@ -168,11 +219,30 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev);
void mlx5_accel_esp_modify_xfrm(struct mlx5e_ipsec_sa_entry *sa_entry,
const struct mlx5_accel_esp_xfrm_attrs *attrs);
int mlx5e_ipsec_aso_init(struct mlx5e_ipsec *ipsec);
void mlx5e_ipsec_aso_cleanup(struct mlx5e_ipsec *ipsec);
int mlx5e_ipsec_aso_query(struct mlx5e_ipsec_sa_entry *sa_entry,
struct mlx5_wqe_aso_ctrl_seg *data);
void mlx5e_ipsec_aso_update_curlft(struct mlx5e_ipsec_sa_entry *sa_entry,
u64 *packets);
void mlx5e_accel_ipsec_fs_read_stats(struct mlx5e_priv *priv,
void *ipsec_stats);
void mlx5e_ipsec_build_accel_xfrm_attrs(struct mlx5e_ipsec_sa_entry *sa_entry,
struct mlx5_accel_esp_xfrm_attrs *attrs);
static inline struct mlx5_core_dev *
mlx5e_ipsec_sa2dev(struct mlx5e_ipsec_sa_entry *sa_entry)
{
return sa_entry->ipsec->mdev;
}
static inline struct mlx5_core_dev *
mlx5e_ipsec_pol2dev(struct mlx5e_ipsec_pol_entry *pol_entry)
{
return pol_entry->ipsec->mdev;
}
#else
static inline void mlx5e_ipsec_init(struct mlx5e_priv *priv)
{
......
......@@ -312,27 +312,31 @@ void mlx5e_ipsec_offload_handle_rx_skb(struct net_device *netdev,
struct mlx5_cqe64 *cqe)
{
u32 ipsec_meta_data = be32_to_cpu(cqe->ft_metadata);
struct mlx5e_priv *priv;
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5e_ipsec *ipsec = priv->ipsec;
struct mlx5e_ipsec_sa_entry *sa_entry;
struct xfrm_offload *xo;
struct xfrm_state *xs;
struct sec_path *sp;
u32 sa_handle;
sa_handle = MLX5_IPSEC_METADATA_HANDLE(ipsec_meta_data);
priv = netdev_priv(netdev);
sp = secpath_set(skb);
if (unlikely(!sp)) {
atomic64_inc(&priv->ipsec->sw_stats.ipsec_rx_drop_sp_alloc);
atomic64_inc(&ipsec->sw_stats.ipsec_rx_drop_sp_alloc);
return;
}
xs = mlx5e_ipsec_sadb_rx_lookup(priv->ipsec, sa_handle);
if (unlikely(!xs)) {
atomic64_inc(&priv->ipsec->sw_stats.ipsec_rx_drop_sadb_miss);
rcu_read_lock();
sa_entry = xa_load(&ipsec->sadb, sa_handle);
if (unlikely(!sa_entry)) {
rcu_read_unlock();
atomic64_inc(&ipsec->sw_stats.ipsec_rx_drop_sadb_miss);
return;
}
xfrm_state_hold(sa_entry->x);
rcu_read_unlock();
sp->xvec[sp->len++] = xs;
sp->xvec[sp->len++] = sa_entry->x;
sp->olen++;
xo = xfrm_offload(skb);
......@@ -349,6 +353,6 @@ void mlx5e_ipsec_offload_handle_rx_skb(struct net_device *netdev,
xo->status = CRYPTO_INVALID_PACKET_SYNTAX;
break;
default:
atomic64_inc(&priv->ipsec->sw_stats.ipsec_rx_drop_syndrome);
atomic64_inc(&ipsec->sw_stats.ipsec_rx_drop_syndrome);
}
}
......@@ -37,6 +37,17 @@
#include "en.h"
#include "ipsec.h"
static const struct counter_desc mlx5e_ipsec_hw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_hw_stats, ipsec_rx_pkts) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_hw_stats, ipsec_rx_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_hw_stats, ipsec_rx_drop_pkts) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_hw_stats, ipsec_rx_drop_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_hw_stats, ipsec_tx_pkts) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_hw_stats, ipsec_tx_bytes) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_hw_stats, ipsec_tx_drop_pkts) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_hw_stats, ipsec_tx_drop_bytes) },
};
static const struct counter_desc mlx5e_ipsec_sw_stats_desc[] = {
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_rx_drop_sp_alloc) },
{ MLX5E_DECLARE_STAT(struct mlx5e_ipsec_sw_stats, ipsec_rx_drop_sadb_miss) },
......@@ -50,8 +61,48 @@ static const struct counter_desc mlx5e_ipsec_sw_stats_desc[] = {
#define MLX5E_READ_CTR_ATOMIC64(ptr, dsc, i) \
atomic64_read((atomic64_t *)((char *)(ptr) + (dsc)[i].offset))
#define NUM_IPSEC_HW_COUNTERS ARRAY_SIZE(mlx5e_ipsec_hw_stats_desc)
#define NUM_IPSEC_SW_COUNTERS ARRAY_SIZE(mlx5e_ipsec_sw_stats_desc)
static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(ipsec_hw)
{
if (!priv->ipsec)
return 0;
return NUM_IPSEC_HW_COUNTERS;
}
static inline MLX5E_DECLARE_STATS_GRP_OP_UPDATE_STATS(ipsec_hw) {}
static MLX5E_DECLARE_STATS_GRP_OP_FILL_STRS(ipsec_hw)
{
unsigned int i;
if (!priv->ipsec)
return idx;
for (i = 0; i < NUM_IPSEC_HW_COUNTERS; i++)
strcpy(data + (idx++) * ETH_GSTRING_LEN,
mlx5e_ipsec_hw_stats_desc[i].format);
return idx;
}
static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(ipsec_hw)
{
int i;
if (!priv->ipsec)
return idx;
mlx5e_accel_ipsec_fs_read_stats(priv, &priv->ipsec->hw_stats);
for (i = 0; i < NUM_IPSEC_HW_COUNTERS; i++)
data[idx++] = MLX5E_READ_CTR_ATOMIC64(&priv->ipsec->hw_stats,
mlx5e_ipsec_hw_stats_desc, i);
return idx;
}
static MLX5E_DECLARE_STATS_GRP_OP_NUM_STATS(ipsec_sw)
{
return priv->ipsec ? NUM_IPSEC_SW_COUNTERS : 0;
......@@ -81,4 +132,5 @@ static MLX5E_DECLARE_STATS_GRP_OP_FILL_STATS(ipsec_sw)
return idx;
}
MLX5E_DEFINE_STATS_GRP(ipsec_hw, 0);
MLX5E_DEFINE_STATS_GRP(ipsec_sw, 0);
......@@ -2480,6 +2480,7 @@ mlx5e_stats_grp_t mlx5e_nic_stats_grps[] = {
&MLX5E_STATS_GRP(per_prio),
&MLX5E_STATS_GRP(pme),
#ifdef CONFIG_MLX5_EN_IPSEC
&MLX5E_STATS_GRP(ipsec_hw),
&MLX5E_STATS_GRP(ipsec_sw),
#endif
&MLX5E_STATS_GRP(tls),
......
......@@ -506,6 +506,7 @@ extern MLX5E_DECLARE_STATS_GRP(per_prio);
extern MLX5E_DECLARE_STATS_GRP(pme);
extern MLX5E_DECLARE_STATS_GRP(channels);
extern MLX5E_DECLARE_STATS_GRP(per_port_buff_congest);
extern MLX5E_DECLARE_STATS_GRP(ipsec_hw);
extern MLX5E_DECLARE_STATS_GRP(ipsec_sw);
extern MLX5E_DECLARE_STATS_GRP(ptp);
extern MLX5E_DECLARE_STATS_GRP(macsec_hw);
......
......@@ -19,6 +19,7 @@
#include "diag/fw_tracer.h"
#include "mlx5_irq.h"
#include "devlink.h"
#include "en_accel/ipsec.h"
enum {
MLX5_EQE_OWNER_INIT_VAL = 0x1,
......@@ -578,6 +579,10 @@ static void gather_async_events_mask(struct mlx5_core_dev *dev, u64 mask[4])
if (MLX5_CAP_MACSEC(dev, log_max_macsec_offload))
async_event_mask |= (1ull << MLX5_EVENT_TYPE_OBJECT_CHANGE);
if (mlx5_ipsec_device_caps(dev) & MLX5_IPSEC_CAP_PACKET_OFFLOAD)
async_event_mask |=
(1ull << MLX5_EVENT_TYPE_OBJECT_CHANGE);
mask[0] = async_event_mask;
if (MLX5_CAP_GEN(dev, event_cap))
......
......@@ -111,8 +111,8 @@
#define ETHTOOL_PRIO_NUM_LEVELS 1
#define ETHTOOL_NUM_PRIOS 11
#define ETHTOOL_MIN_LEVEL (KERNEL_MIN_LEVEL + ETHTOOL_NUM_PRIOS)
/* Promiscuous, Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}} */
#define KERNEL_NIC_PRIO_NUM_LEVELS 7
/* Promiscuous, Vlan, mac, ttc, inner ttc, {UDP/ANY/aRFS/accel/{esp, esp_err}}, IPsec policy */
#define KERNEL_NIC_PRIO_NUM_LEVELS 8
#define KERNEL_NIC_NUM_PRIOS 1
/* One more level for tc */
#define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1)
......@@ -133,7 +133,7 @@
#define LAG_MIN_LEVEL (OFFLOADS_MIN_LEVEL + KERNEL_RX_MACSEC_MIN_LEVEL + 1)
#define KERNEL_TX_IPSEC_NUM_PRIOS 1
#define KERNEL_TX_IPSEC_NUM_LEVELS 1
#define KERNEL_TX_IPSEC_NUM_LEVELS 2
#define KERNEL_TX_IPSEC_MIN_LEVEL (KERNEL_TX_IPSEC_NUM_LEVELS)
#define KERNEL_TX_MACSEC_NUM_PRIOS 1
......
......@@ -353,12 +353,15 @@ void mlx5_aso_build_wqe(struct mlx5_aso *aso, u8 ds_cnt,
cseg->general_id = cpu_to_be32(obj_id);
}
void *mlx5_aso_get_wqe(struct mlx5_aso *aso)
struct mlx5_aso_wqe *mlx5_aso_get_wqe(struct mlx5_aso *aso)
{
struct mlx5_aso_wqe *wqe;
u16 pi;
pi = mlx5_wq_cyc_ctr2ix(&aso->wq, aso->pc);
return mlx5_wq_cyc_get_wqe(&aso->wq, pi);
wqe = mlx5_wq_cyc_get_wqe(&aso->wq, pi);
memset(wqe, 0, sizeof(*wqe));
return wqe;
}
void mlx5_aso_post_wqe(struct mlx5_aso *aso, bool with_data,
......
......@@ -15,6 +15,7 @@
#define MLX5_WQE_CTRL_WQE_OPC_MOD_SHIFT 24
#define MLX5_MACSEC_ASO_DS_CNT (DIV_ROUND_UP(sizeof(struct mlx5_aso_wqe), MLX5_SEND_WQE_DS))
#define ASO_CTRL_READ_EN BIT(0)
struct mlx5_wqe_aso_ctrl_seg {
__be32 va_h;
__be32 va_l; /* include read_enable */
......@@ -71,13 +72,14 @@ enum {
};
enum {
MLX5_ACCESS_ASO_OPC_MOD_IPSEC = 0x0,
MLX5_ACCESS_ASO_OPC_MOD_FLOW_METER = 0x2,
MLX5_ACCESS_ASO_OPC_MOD_MACSEC = 0x5,
};
struct mlx5_aso;
void *mlx5_aso_get_wqe(struct mlx5_aso *aso);
struct mlx5_aso_wqe *mlx5_aso_get_wqe(struct mlx5_aso *aso);
void mlx5_aso_build_wqe(struct mlx5_aso *aso, u8 ds_cnt,
struct mlx5_aso_wqe *aso_wqe,
u32 obj_id, u32 opc_mode);
......
......@@ -302,6 +302,11 @@ static int nfp_net_xfrm_add_state(struct xfrm_state *x)
return -EINVAL;
}
if (x->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
nn_err(nn, "Unsupported xfrm offload type\n");
return -EINVAL;
}
cfg->spi = ntohl(x->id.spi);
/* Hash/Authentication */
......
......@@ -149,6 +149,11 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs)
return -EINVAL;
}
if (xs->xso.type != XFRM_DEV_OFFLOAD_CRYPTO) {
netdev_err(dev, "Unsupported ipsec offload type\n");
return -EINVAL;
}
/* find the first unused index */
ret = nsim_ipsec_find_empty_idx(ipsec);
if (ret < 0) {
......
......@@ -446,7 +446,10 @@ struct mlx5_ifc_flow_table_prop_layout_bits {
u8 max_modify_header_actions[0x8];
u8 max_ft_level[0x8];
u8 reserved_at_40[0x6];
u8 reformat_add_esp_trasport[0x1];
u8 reserved_at_41[0x2];
u8 reformat_del_esp_trasport[0x1];
u8 reserved_at_44[0x2];
u8 execute_aso[0x1];
u8 reserved_at_47[0x19];
......@@ -639,8 +642,10 @@ struct mlx5_ifc_fte_match_set_misc2_bits {
u8 reserved_at_1a0[0x8];
u8 macsec_syndrome[0x8];
u8 ipsec_syndrome[0x8];
u8 reserved_at_1b8[0x8];
u8 reserved_at_1b0[0x50];
u8 reserved_at_1c0[0x40];
};
struct mlx5_ifc_fte_match_set_misc3_bits {
......@@ -6452,6 +6457,9 @@ enum mlx5_reformat_ctx_type {
MLX5_REFORMAT_TYPE_L2_TO_L2_TUNNEL = 0x2,
MLX5_REFORMAT_TYPE_L3_TUNNEL_TO_L2 = 0x3,
MLX5_REFORMAT_TYPE_L2_TO_L3_TUNNEL = 0x4,
MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV4 = 0x5,
MLX5_REFORMAT_TYPE_DEL_ESP_TRANSPORT = 0x8,
MLX5_REFORMAT_TYPE_ADD_ESP_TRANSPORT_OVER_IPV6 = 0xb,
MLX5_REFORMAT_TYPE_INSERT_HDR = 0xf,
MLX5_REFORMAT_TYPE_REMOVE_HDR = 0x10,
MLX5_REFORMAT_TYPE_ADD_MACSEC = 0x11,
......@@ -11631,6 +11639,41 @@ enum {
MLX5_IPSEC_OBJECT_ICV_LEN_16B,
};
enum {
MLX5_IPSEC_ASO_REG_C_0_1 = 0x0,
MLX5_IPSEC_ASO_REG_C_2_3 = 0x1,
MLX5_IPSEC_ASO_REG_C_4_5 = 0x2,
MLX5_IPSEC_ASO_REG_C_6_7 = 0x3,
};
enum {
MLX5_IPSEC_ASO_MODE = 0x0,
MLX5_IPSEC_ASO_REPLAY_PROTECTION = 0x1,
MLX5_IPSEC_ASO_INC_SN = 0x2,
};
struct mlx5_ifc_ipsec_aso_bits {
u8 valid[0x1];
u8 reserved_at_201[0x1];
u8 mode[0x2];
u8 window_sz[0x2];
u8 soft_lft_arm[0x1];
u8 hard_lft_arm[0x1];
u8 remove_flow_enable[0x1];
u8 esn_event_arm[0x1];
u8 reserved_at_20a[0x16];
u8 remove_flow_pkt_cnt[0x20];
u8 remove_flow_soft_lft[0x20];
u8 reserved_at_260[0x80];
u8 mode_parameter[0x20];
u8 replay_protection_window[0x100];
};
struct mlx5_ifc_ipsec_obj_bits {
u8 modify_field_select[0x40];
u8 full_offload[0x1];
......@@ -11652,7 +11695,11 @@ struct mlx5_ifc_ipsec_obj_bits {
u8 implicit_iv[0x40];
u8 reserved_at_100[0x700];
u8 reserved_at_100[0x8];
u8 ipsec_aso_access_pd[0x18];
u8 reserved_at_120[0xe0];
struct mlx5_ifc_ipsec_aso_bits ipsec_aso;
};
struct mlx5_ifc_create_ipsec_obj_in_bits {
......
......@@ -1041,6 +1041,10 @@ struct xfrmdev_ops {
bool (*xdo_dev_offload_ok) (struct sk_buff *skb,
struct xfrm_state *x);
void (*xdo_dev_state_advance_esn) (struct xfrm_state *x);
void (*xdo_dev_state_update_curlft) (struct xfrm_state *x);
int (*xdo_dev_policy_add) (struct xfrm_policy *x);
void (*xdo_dev_policy_delete) (struct xfrm_policy *x);
void (*xdo_dev_policy_free) (struct xfrm_policy *x);
};
#endif
......
......@@ -129,6 +129,13 @@ struct xfrm_state_walk {
enum {
XFRM_DEV_OFFLOAD_IN = 1,
XFRM_DEV_OFFLOAD_OUT,
XFRM_DEV_OFFLOAD_FWD,
};
enum {
XFRM_DEV_OFFLOAD_UNSPECIFIED,
XFRM_DEV_OFFLOAD_CRYPTO,
XFRM_DEV_OFFLOAD_PACKET,
};
struct xfrm_dev_offload {
......@@ -137,6 +144,7 @@ struct xfrm_dev_offload {
struct net_device *real_dev;
unsigned long offload_handle;
u8 dir : 2;
u8 type : 2;
};
struct xfrm_mode {
......@@ -534,6 +542,8 @@ struct xfrm_policy {
struct xfrm_tmpl xfrm_vec[XFRM_MAX_DEPTH];
struct hlist_node bydst_inexact_list;
struct rcu_head rcu;
struct xfrm_dev_offload xdo;
};
static inline struct net *xp_net(const struct xfrm_policy *xp)
......@@ -1092,6 +1102,29 @@ xfrm_state_addr_cmp(const struct xfrm_tmpl *tmpl, const struct xfrm_state *x, un
return !0;
}
#ifdef CONFIG_XFRM
static inline struct xfrm_state *xfrm_input_state(struct sk_buff *skb)
{
struct sec_path *sp = skb_sec_path(skb);
return sp->xvec[sp->len - 1];
}
#endif
static inline struct xfrm_offload *xfrm_offload(struct sk_buff *skb)
{
#ifdef CONFIG_XFRM
struct sec_path *sp = skb_sec_path(skb);
if (!sp || !sp->olen || sp->len != sp->olen)
return NULL;
return &sp->ovec[sp->olen - 1];
#else
return NULL;
#endif
}
#ifdef CONFIG_XFRM
int __xfrm_policy_check(struct sock *, int dir, struct sk_buff *skb,
unsigned short family);
......@@ -1123,10 +1156,19 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
{
struct net *net = dev_net(skb->dev);
int ndir = dir | (reverse ? XFRM_POLICY_MASK + 1 : 0);
struct xfrm_offload *xo = xfrm_offload(skb);
struct xfrm_state *x;
if (sk && sk->sk_policy[XFRM_POLICY_IN])
return __xfrm_policy_check(sk, ndir, skb, family);
if (xo) {
x = xfrm_input_state(skb);
if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET)
return (xo->flags & CRYPTO_DONE) &&
(xo->status & CRYPTO_SUCCESS);
}
return __xfrm_check_nopolicy(net, skb, dir) ||
__xfrm_check_dev_nopolicy(skb, dir, family) ||
__xfrm_policy_check(sk, ndir, skb, family);
......@@ -1529,6 +1571,23 @@ struct xfrm_state *xfrm_stateonly_find(struct net *net, u32 mark, u32 if_id,
struct xfrm_state *xfrm_state_lookup_byspi(struct net *net, __be32 spi,
unsigned short family);
int xfrm_state_check_expire(struct xfrm_state *x);
#ifdef CONFIG_XFRM_OFFLOAD
static inline void xfrm_dev_state_update_curlft(struct xfrm_state *x)
{
struct xfrm_dev_offload *xdo = &x->xso;
struct net_device *dev = xdo->dev;
if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
return;
if (dev && dev->xfrmdev_ops &&
dev->xfrmdev_ops->xdo_dev_state_update_curlft)
dev->xfrmdev_ops->xdo_dev_state_update_curlft(x);
}
#else
static inline void xfrm_dev_state_update_curlft(struct xfrm_state *x) {}
#endif
void xfrm_state_insert(struct xfrm_state *x);
int xfrm_state_add(struct xfrm_state *x);
int xfrm_state_update(struct xfrm_state *x);
......@@ -1578,6 +1637,8 @@ struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq);
int xfrm_state_delete(struct xfrm_state *x);
int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync);
int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid);
int xfrm_dev_policy_flush(struct net *net, struct net_device *dev,
bool task_valid);
void xfrm_sad_getinfo(struct net *net, struct xfrmk_sadinfo *si);
void xfrm_spd_getinfo(struct net *net, struct xfrmk_spdinfo *si);
u32 xfrm_replay_seqhi(struct xfrm_state *x, __be32 net_seq);
......@@ -1860,29 +1921,6 @@ static inline void xfrm_states_delete(struct xfrm_state **states, int n)
}
#endif
#ifdef CONFIG_XFRM
static inline struct xfrm_state *xfrm_input_state(struct sk_buff *skb)
{
struct sec_path *sp = skb_sec_path(skb);
return sp->xvec[sp->len - 1];
}
#endif
static inline struct xfrm_offload *xfrm_offload(struct sk_buff *skb)
{
#ifdef CONFIG_XFRM
struct sec_path *sp = skb_sec_path(skb);
if (!sp || !sp->olen || sp->len != sp->olen)
return NULL;
return &sp->ovec[sp->olen - 1];
#else
return NULL;
#endif
}
void __init xfrm_dev_init(void);
#ifdef CONFIG_XFRM_OFFLOAD
......@@ -1892,6 +1930,9 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
struct xfrm_user_offload *xuo,
struct netlink_ext_ack *extack);
int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
struct xfrm_user_offload *xuo, u8 dir,
struct netlink_ext_ack *extack);
bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x);
static inline void xfrm_dev_state_advance_esn(struct xfrm_state *x)
......@@ -1940,6 +1981,28 @@ static inline void xfrm_dev_state_free(struct xfrm_state *x)
netdev_put(dev, &xso->dev_tracker);
}
}
static inline void xfrm_dev_policy_delete(struct xfrm_policy *x)
{
struct xfrm_dev_offload *xdo = &x->xdo;
struct net_device *dev = xdo->dev;
if (dev && dev->xfrmdev_ops && dev->xfrmdev_ops->xdo_dev_policy_delete)
dev->xfrmdev_ops->xdo_dev_policy_delete(x);
}
static inline void xfrm_dev_policy_free(struct xfrm_policy *x)
{
struct xfrm_dev_offload *xdo = &x->xdo;
struct net_device *dev = xdo->dev;
if (dev && dev->xfrmdev_ops) {
if (dev->xfrmdev_ops->xdo_dev_policy_free)
dev->xfrmdev_ops->xdo_dev_policy_free(x);
xdo->dev = NULL;
netdev_put(dev, &xdo->dev_tracker);
}
}
#else
static inline void xfrm_dev_resume(struct sk_buff *skb)
{
......@@ -1967,6 +2030,21 @@ static inline void xfrm_dev_state_free(struct xfrm_state *x)
{
}
static inline int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
struct xfrm_user_offload *xuo, u8 dir,
struct netlink_ext_ack *extack)
{
return 0;
}
static inline void xfrm_dev_policy_delete(struct xfrm_policy *x)
{
}
static inline void xfrm_dev_policy_free(struct xfrm_policy *x)
{
}
static inline bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
{
return false;
......
......@@ -519,6 +519,12 @@ struct xfrm_user_offload {
*/
#define XFRM_OFFLOAD_IPV6 1
#define XFRM_OFFLOAD_INBOUND 2
/* Two bits above are relevant for state path only, while
* offload is used for both policy and state flows.
*
* In policy offload mode, they are free and can be safely reused.
*/
#define XFRM_OFFLOAD_PACKET 4
struct xfrm_userpolicy_default {
#define XFRM_USERPOLICY_UNSPEC 0
......
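For context, a small userspace sketch of what the new bit means in practice
(if_nametoindex() comes from <net/if.h>; the netlink plumbing around the
XFRM_MSG_NEWPOLICY request is assumed and not shown): the XFRMA_OFFLOAD_DEV
attribute simply names the device and sets XFRM_OFFLOAD_PACKET, the only flag
xfrm_dev_policy_add() accepts for policies.

#include <net/if.h>
#include <linux/xfrm.h>

/* Hypothetical helper: fill the XFRMA_OFFLOAD_DEV payload of an
 * XFRM_MSG_NEWPOLICY request to ask for packet offload on a device.
 */
static void fill_packet_offload(struct xfrm_user_offload *xuo, const char *ifname)
{
	xuo->ifindex = if_nametoindex(ifname);
	xuo->flags = XFRM_OFFLOAD_PACKET;
}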
......@@ -132,6 +132,16 @@ struct sk_buff *validate_xmit_xfrm(struct sk_buff *skb, netdev_features_t featur
if (xo->flags & XFRM_GRO || x->xso.dir == XFRM_DEV_OFFLOAD_IN)
return skb;
/* The packet was sent to the HW IPsec packet offload engine,
* but to the wrong device. Drop the packet, so it won't skip
* the XFRM stack.
*/
if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET && x->xso.dev != dev) {
kfree_skb(skb);
dev_core_stats_tx_dropped_inc(dev);
return NULL;
}
/* This skb was already validated on the upper/virtual dev */
if ((x->xso.dev != dev) && (x->xso.real_dev == dev))
return skb;
......@@ -229,6 +239,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
struct xfrm_dev_offload *xso = &x->xso;
xfrm_address_t *saddr;
xfrm_address_t *daddr;
bool is_packet_offload;
if (!x->type_offload) {
NL_SET_ERR_MSG(extack, "Type doesn't support offload");
......@@ -241,11 +252,13 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
return -EINVAL;
}
if (xuo->flags & ~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND)) {
if (xuo->flags &
~(XFRM_OFFLOAD_IPV6 | XFRM_OFFLOAD_INBOUND | XFRM_OFFLOAD_PACKET)) {
NL_SET_ERR_MSG(extack, "Unrecognized flags in offload request");
return -EINVAL;
}
is_packet_offload = xuo->flags & XFRM_OFFLOAD_PACKET;
dev = dev_get_by_index(net, xuo->ifindex);
if (!dev) {
if (!(xuo->flags & XFRM_OFFLOAD_INBOUND)) {
......@@ -260,7 +273,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
x->props.family,
xfrm_smark_get(0, x));
if (IS_ERR(dst))
return 0;
return (is_packet_offload) ? -EINVAL : 0;
dev = dst->dev;
......@@ -271,7 +284,7 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
if (!dev->xfrmdev_ops || !dev->xfrmdev_ops->xdo_dev_state_add) {
xso->dev = NULL;
dev_put(dev);
return 0;
return (is_packet_offload) ? -EINVAL : 0;
}
if (x->props.flags & XFRM_STATE_ESN &&
......@@ -291,14 +304,28 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
else
xso->dir = XFRM_DEV_OFFLOAD_OUT;
if (is_packet_offload)
xso->type = XFRM_DEV_OFFLOAD_PACKET;
else
xso->type = XFRM_DEV_OFFLOAD_CRYPTO;
err = dev->xfrmdev_ops->xdo_dev_state_add(x);
if (err) {
xso->dev = NULL;
xso->dir = 0;
xso->real_dev = NULL;
netdev_put(dev, &xso->dev_tracker);
if (err != -EOPNOTSUPP) {
xso->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
/* User explicitly requested packet offload mode and configured
* policy in addition to the XFRM state. So be civil to users,
* and return an error instead of taking the fallback path.
*
* This WARN_ON() can be seen as documentation for driver
* authors not to return -EOPNOTSUPP in packet offload mode.
*/
WARN_ON(err == -EOPNOTSUPP && is_packet_offload);
if (err != -EOPNOTSUPP || is_packet_offload) {
NL_SET_ERR_MSG(extack, "Device failed to offload this state");
return err;
}
......@@ -308,6 +335,69 @@ int xfrm_dev_state_add(struct net *net, struct xfrm_state *x,
}
EXPORT_SYMBOL_GPL(xfrm_dev_state_add);
int xfrm_dev_policy_add(struct net *net, struct xfrm_policy *xp,
struct xfrm_user_offload *xuo, u8 dir,
struct netlink_ext_ack *extack)
{
struct xfrm_dev_offload *xdo = &xp->xdo;
struct net_device *dev;
int err;
if (!xuo->flags || xuo->flags & ~XFRM_OFFLOAD_PACKET) {
/* We support only packet offload mode, which means
* that the user must set the XFRM_OFFLOAD_PACKET bit.
*/
NL_SET_ERR_MSG(extack, "Unrecognized flags in offload request");
return -EINVAL;
}
dev = dev_get_by_index(net, xuo->ifindex);
if (!dev)
return -EINVAL;
if (!dev->xfrmdev_ops || !dev->xfrmdev_ops->xdo_dev_policy_add) {
xdo->dev = NULL;
dev_put(dev);
NL_SET_ERR_MSG(extack, "Policy offload is not supported");
return -EINVAL;
}
xdo->dev = dev;
netdev_tracker_alloc(dev, &xdo->dev_tracker, GFP_ATOMIC);
xdo->real_dev = dev;
xdo->type = XFRM_DEV_OFFLOAD_PACKET;
switch (dir) {
case XFRM_POLICY_IN:
xdo->dir = XFRM_DEV_OFFLOAD_IN;
break;
case XFRM_POLICY_OUT:
xdo->dir = XFRM_DEV_OFFLOAD_OUT;
break;
case XFRM_POLICY_FWD:
xdo->dir = XFRM_DEV_OFFLOAD_FWD;
break;
default:
xdo->dev = NULL;
dev_put(dev);
NL_SET_ERR_MSG(extack, "Unrecognized offload direction");
return -EINVAL;
}
err = dev->xfrmdev_ops->xdo_dev_policy_add(xp);
if (err) {
xdo->dev = NULL;
xdo->real_dev = NULL;
xdo->type = XFRM_DEV_OFFLOAD_UNSPECIFIED;
xdo->dir = 0;
netdev_put(dev, &xdo->dev_tracker);
NL_SET_ERR_MSG(extack, "Device failed to offload this policy");
return err;
}
return 0;
}
EXPORT_SYMBOL_GPL(xfrm_dev_policy_add);
bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
{
int mtu;
......@@ -318,8 +408,9 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x)
if (!x->type_offload || x->encap)
return false;
if ((!dev || (dev == xfrm_dst_path(dst)->dev)) &&
(!xdst->child->xfrm)) {
if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET ||
((!dev || (dev == xfrm_dst_path(dst)->dev)) &&
!xdst->child->xfrm)) {
mtu = xfrm_state_mtu(x, xdst->child_mtu_cached);
if (skb->len <= mtu)
goto ok;
......@@ -410,8 +501,10 @@ static int xfrm_api_check(struct net_device *dev)
static int xfrm_dev_down(struct net_device *dev)
{
if (dev->features & NETIF_F_HW_ESP)
if (dev->features & NETIF_F_HW_ESP) {
xfrm_dev_state_flush(dev_net(dev), dev, true);
xfrm_dev_policy_flush(dev_net(dev), dev, true);
}
return NOTIFY_DONE;
}
......
......@@ -492,7 +492,7 @@ static int xfrm_output_one(struct sk_buff *skb, int err)
struct xfrm_state *x = dst->xfrm;
struct net *net = xs_net(x);
if (err <= 0)
if (err <= 0 || x->xso.type == XFRM_DEV_OFFLOAD_PACKET)
goto resume;
do {
......@@ -717,6 +717,16 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb)
break;
}
if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET) {
if (!xfrm_dev_offload_ok(skb, x)) {
XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR);
kfree_skb(skb);
return -EHOSTUNREACH;
}
return xfrm_output_resume(sk, skb, 0);
}
secpath_reset(skb);
if (xfrm_dev_offload_ok(skb, x)) {
......
......@@ -425,6 +425,7 @@ void xfrm_policy_destroy(struct xfrm_policy *policy)
if (del_timer(&policy->timer) || del_timer(&policy->polq.hold_timer))
BUG();
xfrm_dev_policy_free(policy);
call_rcu(&policy->rcu, xfrm_policy_destroy_rcu);
}
EXPORT_SYMBOL(xfrm_policy_destroy);
......@@ -535,7 +536,7 @@ static void xfrm_dst_hash_transfer(struct net *net,
__get_hash_thresh(net, pol->family, dir, &dbits, &sbits);
h = __addr_hash(&pol->selector.daddr, &pol->selector.saddr,
pol->family, nhashmask, dbits, sbits);
if (!entry0) {
if (!entry0 || pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
hlist_del_rcu(&pol->bydst);
hlist_add_head_rcu(&pol->bydst, ndsttable + h);
h0 = h;
......@@ -866,7 +867,7 @@ static void xfrm_policy_inexact_list_reinsert(struct net *net,
break;
}
if (newpos)
if (newpos && policy->xdo.type != XFRM_DEV_OFFLOAD_PACKET)
hlist_add_behind_rcu(&policy->bydst, newpos);
else
hlist_add_head_rcu(&policy->bydst, &n->hhead);
......@@ -1347,7 +1348,7 @@ static void xfrm_hash_rebuild(struct work_struct *work)
else
break;
}
if (newpos)
if (newpos && policy->xdo.type != XFRM_DEV_OFFLOAD_PACKET)
hlist_add_behind_rcu(&policy->bydst, newpos);
else
hlist_add_head_rcu(&policy->bydst, chain);
......@@ -1524,7 +1525,7 @@ static void xfrm_policy_insert_inexact_list(struct hlist_head *chain,
break;
}
if (newpos)
if (newpos && policy->xdo.type != XFRM_DEV_OFFLOAD_PACKET)
hlist_add_behind_rcu(&policy->bydst_inexact_list, newpos);
else
hlist_add_head_rcu(&policy->bydst_inexact_list, chain);
......@@ -1561,9 +1562,12 @@ static struct xfrm_policy *xfrm_policy_insert_list(struct hlist_head *chain,
break;
}
if (newpos)
if (newpos && policy->xdo.type != XFRM_DEV_OFFLOAD_PACKET)
hlist_add_behind_rcu(&policy->bydst, &newpos->bydst);
else
/* Packet offload policies enter at the head
* to speed up lookups.
*/
hlist_add_head_rcu(&policy->bydst, chain);
return delpol;
......@@ -1769,12 +1773,41 @@ xfrm_policy_flush_secctx_check(struct net *net, u8 type, bool task_valid)
}
return err;
}
static inline int xfrm_dev_policy_flush_secctx_check(struct net *net,
struct net_device *dev,
bool task_valid)
{
struct xfrm_policy *pol;
int err = 0;
list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) {
if (pol->walk.dead ||
xfrm_policy_id2dir(pol->index) >= XFRM_POLICY_MAX ||
pol->xdo.dev != dev)
continue;
err = security_xfrm_policy_delete(pol->security);
if (err) {
xfrm_audit_policy_delete(pol, 0, task_valid);
return err;
}
}
return err;
}
#else
static inline int
xfrm_policy_flush_secctx_check(struct net *net, u8 type, bool task_valid)
{
return 0;
}
static inline int xfrm_dev_policy_flush_secctx_check(struct net *net,
struct net_device *dev,
bool task_valid)
{
return 0;
}
#endif
int xfrm_policy_flush(struct net *net, u8 type, bool task_valid)
......@@ -1814,6 +1847,44 @@ int xfrm_policy_flush(struct net *net, u8 type, bool task_valid)
}
EXPORT_SYMBOL(xfrm_policy_flush);
int xfrm_dev_policy_flush(struct net *net, struct net_device *dev,
bool task_valid)
{
int dir, err = 0, cnt = 0;
struct xfrm_policy *pol;
spin_lock_bh(&net->xfrm.xfrm_policy_lock);
err = xfrm_dev_policy_flush_secctx_check(net, dev, task_valid);
if (err)
goto out;
again:
list_for_each_entry(pol, &net->xfrm.policy_all, walk.all) {
dir = xfrm_policy_id2dir(pol->index);
if (pol->walk.dead ||
dir >= XFRM_POLICY_MAX ||
pol->xdo.dev != dev)
continue;
__xfrm_policy_unlink(pol, dir);
spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
cnt++;
xfrm_audit_policy_delete(pol, 1, task_valid);
xfrm_policy_kill(pol);
spin_lock_bh(&net->xfrm.xfrm_policy_lock);
goto again;
}
if (cnt)
__xfrm_policy_inexact_flush(net);
else
err = -ESRCH;
out:
spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
return err;
}
EXPORT_SYMBOL(xfrm_dev_policy_flush);
int xfrm_policy_walk(struct net *net, struct xfrm_policy_walk *walk,
int (*func)(struct xfrm_policy *, int, int, void*),
void *data)
......@@ -2113,6 +2184,9 @@ static struct xfrm_policy *xfrm_policy_lookup_bytype(struct net *net, u8 type,
break;
}
}
if (ret && ret->xdo.type == XFRM_DEV_OFFLOAD_PACKET)
goto skip_inexact;
bin = xfrm_policy_inexact_lookup_rcu(net, type, family, dir, if_id);
if (!bin || !xfrm_policy_find_inexact_candidates(&cand, bin, saddr,
daddr))
......@@ -2245,6 +2319,7 @@ int xfrm_policy_delete(struct xfrm_policy *pol, int dir)
pol = __xfrm_policy_unlink(pol, dir);
spin_unlock_bh(&net->xfrm.xfrm_policy_lock);
if (pol) {
xfrm_dev_policy_delete(pol);
xfrm_policy_kill(pol);
return 0;
}
......
......@@ -956,6 +956,8 @@ static int copy_user_offload(struct xfrm_dev_offload *xso, struct sk_buff *skb)
xuo->ifindex = xso->dev->ifindex;
if (xso->dir == XFRM_DEV_OFFLOAD_IN)
xuo->flags = XFRM_OFFLOAD_INBOUND;
if (xso->type == XFRM_DEV_OFFLOAD_PACKET)
xuo->flags |= XFRM_OFFLOAD_PACKET;
return 0;
}
......@@ -1890,6 +1892,15 @@ static struct xfrm_policy *xfrm_policy_construct(struct net *net,
if (attrs[XFRMA_IF_ID])
xp->if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
/* configure the hardware if offload is requested */
if (attrs[XFRMA_OFFLOAD_DEV]) {
err = xfrm_dev_policy_add(net, xp,
nla_data(attrs[XFRMA_OFFLOAD_DEV]),
p->dir, extack);
if (err)
goto error;
}
return xp;
error:
*errp = err;
......@@ -1929,6 +1940,7 @@ static int xfrm_add_policy(struct sk_buff *skb, struct nlmsghdr *nlh,
xfrm_audit_policy_add(xp, err ? 0 : 1, true);
if (err) {
xfrm_dev_policy_delete(xp);
security_xfrm_policy_free(xp->security);
kfree(xp);
return err;
......@@ -2041,6 +2053,8 @@ static int dump_one_policy(struct xfrm_policy *xp, int dir, int count, void *ptr
err = xfrm_mark_put(skb, &xp->mark);
if (!err)
err = xfrm_if_id_put(skb, xp->if_id);
if (!err && xp->xdo.dev)
err = copy_user_offload(&xp->xdo, skb);
if (err) {
nlmsg_cancel(skb, nlh);
return err;
......@@ -3379,6 +3393,8 @@ static int build_acquire(struct sk_buff *skb, struct xfrm_state *x,
err = xfrm_mark_put(skb, &xp->mark);
if (!err)
err = xfrm_if_id_put(skb, xp->if_id);
if (!err && xp->xdo.dev)
err = copy_user_offload(&xp->xdo, skb);
if (err) {
nlmsg_cancel(skb, nlh);
return err;
......@@ -3497,6 +3513,8 @@ static int build_polexpire(struct sk_buff *skb, struct xfrm_policy *xp,
err = xfrm_mark_put(skb, &xp->mark);
if (!err)
err = xfrm_if_id_put(skb, xp->if_id);
if (!err && xp->xdo.dev)
err = copy_user_offload(&xp->xdo, skb);
if (err) {
nlmsg_cancel(skb, nlh);
return err;
......@@ -3580,6 +3598,8 @@ static int xfrm_notify_policy(struct xfrm_policy *xp, int dir, const struct km_e
err = xfrm_mark_put(skb, &xp->mark);
if (!err)
err = xfrm_if_id_put(skb, xp->if_id);
if (!err && xp->xdo.dev)
err = copy_user_offload(&xp->xdo, skb);
if (err)
goto out_free_skb;
......