Merge tag 'net-5.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Pull networking fixes from Jakub Kicinski:
 "Including fixes from wireless, and wireguard.

  Mostly scattered driver changes this week, with one big clump in
  mv88e6xxx. Nothing of note, really.

  Current release - regressions:

   - smc: keep smc_close_final()'s error code during active close

  Current release - new code bugs:

   - iwlwifi: various static checker fixes (int overflow, leaks, missing
     error codes)

   - rtw89: fix size of firmware header before transfer, avoid crash

   - mt76: fix timestamp check in tx_status; fix pktid leak

   - mscc: ocelot: fix missing unlock on error in ocelot_hwstamp_set()

  Previous releases - regressions:

   - smc: fix list corruption in smc_lgr_cleanup_early

   - ipv4: convert fib_num_tclassid_users to atomic_t

  Previous releases - always broken:

   - tls: fix authentication failure in CCM mode

   - vrf: reset IPCB/IP6CB when processing outbound pkts, prevent
     incorrect processing

   - dsa: mv88e6xxx: fixes for various device errata

   - rds: correct socket tunable error in rds_tcp_tune()

   - ipv6: fix memory leak in fib6_rule_suppress

   - wireguard: reset peer src endpoint when netns exits

   - wireguard: improve resilience to DoS around incoming handshakes

   - tcp: fix page frag corruption on page fault which involves TCP

   - mpls: fix missing attributes in delete notifications

   - mt7915: fix NULL pointer dereference with ad-hoc mode

  Misc:

   - rt2x00: be more lenient about EPROTO errors during start

   - mlx4_en: update reported link modes for 1/10G"

* tag 'net-5.16-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (85 commits)
  net: dsa: b53: Add SPI ID table
  gro: Fix inconsistent indenting
  selftests: net: Correct case name
  net/rds: correct socket tunable error in rds_tcp_tune()
  mctp: Don't let RTM_DELROUTE delete local routes
  net/smc: Keep smc_close_final rc during active close
  ibmvnic: drop bad optimization in reuse_tx_pools()
  ibmvnic: drop bad optimization in reuse_rx_pools()
  net/smc: fix wrong list_del in smc_lgr_cleanup_early
  Fix Comment of ETH_P_802_3_MIN
  ethernet: aquantia: Try MAC address from device tree
  ipv4: convert fib_num_tclassid_users to atomic_t
  net: avoid uninit-value from tcp_conn_request
  net: annotate data-races on txq->xmit_lock_owner
  octeontx2-af: Fix a memleak bug in rvu_mbox_init()
  net/mlx4_en: Fix an use-after-free bug in mlx4_en_try_alloc_resources()
  vrf: Reset IPCB/IP6CB when processing outbound pkts in vrf dev xmit
  net: qlogic: qlcnic: Fix a NULL pointer dereference in qlcnic_83xx_add_rings()
  net: dsa: mv88e6xxx: Link in pcs_get_state() if AN is bypassed
  net: dsa: mv88e6xxx: Fix inband AN for 2500base-x on 88E6393X family
  ...
Commit: a51e3ac43d
Author: Linus Torvalds
Date: 2021-12-02 11:22:06 -08:00
104 changed files with 998 additions and 410 deletions

@@ -16624,7 +16624,8 @@ F: drivers/iommu/s390-iommu.c
 
 S390 IUCV NETWORK LAYER
 M:  Julian Wiedmann <jwi@linux.ibm.com>
-M:  Karsten Graul <kgraul@linux.ibm.com>
+M:  Alexandra Winter <wintera@linux.ibm.com>
+M:  Wenjia Zhang <wenjia@linux.ibm.com>
 L:  linux-s390@vger.kernel.org
 L:  netdev@vger.kernel.org
 S:  Supported
@@ -16635,7 +16636,8 @@ F: net/iucv/
 
 S390 NETWORK DRIVERS
 M:  Julian Wiedmann <jwi@linux.ibm.com>
-M:  Karsten Graul <kgraul@linux.ibm.com>
+M:  Alexandra Winter <wintera@linux.ibm.com>
+M:  Wenjia Zhang <wenjia@linux.ibm.com>
 L:  linux-s390@vger.kernel.org
 L:  netdev@vger.kernel.org
 S:  Supported

@@ -349,6 +349,19 @@ static const struct of_device_id b53_spi_of_match[] = {
 };
 MODULE_DEVICE_TABLE(of, b53_spi_of_match);
 
+static const struct spi_device_id b53_spi_ids[] = {
+    { .name = "bcm5325" },
+    { .name = "bcm5365" },
+    { .name = "bcm5395" },
+    { .name = "bcm5397" },
+    { .name = "bcm5398" },
+    { .name = "bcm53115" },
+    { .name = "bcm53125" },
+    { .name = "bcm53128" },
+    { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(spi, b53_spi_ids);
+
 static struct spi_driver b53_spi_driver = {
     .driver = {
         .name = "b53-switch",
@@ -357,6 +370,7 @@ static struct spi_driver b53_spi_driver = {
     .probe = b53_spi_probe,
     .remove = b53_spi_remove,
     .shutdown = b53_spi_shutdown,
+    .id_table = b53_spi_ids,
 };
 
 module_spi_driver(b53_spi_driver);
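The `spi_device_id` table added above is what lets the SPI core match a b53 switch by plain device name (and export module aliases for autoloading) when no devicetree node describes it. A hedged user-space sketch of that name-table walk — `spi_match_id()` here is an illustrative stand-in, not the kernel's own implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Mirrors struct spi_device_id: matching is by name only in this sketch. */
struct spi_id { const char *name; };

static const struct spi_id b53_spi_ids[] = {
    { "bcm5325" }, { "bcm5365" }, { "bcm5395" }, { "bcm5397" },
    { "bcm5398" }, { "bcm53115" }, { "bcm53125" }, { "bcm53128" },
    { NULL } /* sentinel terminates the walk */
};

/* Illustrative stand-in for the SPI core's id-table lookup: walk the
 * table until the sentinel, return the matching entry or NULL. */
static const struct spi_id *spi_match_id(const struct spi_id *ids,
                                         const char *modalias)
{
    for (; ids->name; ids++)
        if (!strcmp(ids->name, modalias))
            return ids;
    return NULL; /* no match: the driver would not be bound */
}
```

Without an `id_table`, a device instantiated by name rather than by OF compatible string would previously fail to bind to this driver.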

@@ -50,11 +50,22 @@ static int mv88e6390_serdes_write(struct mv88e6xxx_chip *chip,
 }
 
 static int mv88e6xxx_serdes_pcs_get_state(struct mv88e6xxx_chip *chip,
-                                          u16 status, u16 lpa,
+                                          u16 ctrl, u16 status, u16 lpa,
                                           struct phylink_link_state *state)
 {
+    state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK);
+
     if (status & MV88E6390_SGMII_PHY_STATUS_SPD_DPL_VALID) {
-        state->link = !!(status & MV88E6390_SGMII_PHY_STATUS_LINK);
+        /* The Spped and Duplex Resolved register is 1 if AN is enabled
+         * and complete, or if AN is disabled. So with disabled AN we
+         * still get here on link up. But we want to set an_complete
+         * only if AN was enabled, thus we look at BMCR_ANENABLE.
+         * (According to 802.3-2008 section 22.2.4.2.10, we should be
+         * able to get this same value from BMSR_ANEGCAPABLE, but tests
+         * show that these Marvell PHYs don't conform to this part of
+         * the specificaion - BMSR_ANEGCAPABLE is simply always 1.)
+         */
+        state->an_complete = !!(ctrl & BMCR_ANENABLE);
         state->duplex = status &
                         MV88E6390_SGMII_PHY_STATUS_DUPLEX_FULL ?
                         DUPLEX_FULL : DUPLEX_HALF;
@@ -81,6 +92,18 @@ static int mv88e6xxx_serdes_pcs_get_state(struct mv88e6xxx_chip *chip,
             dev_err(chip->dev, "invalid PHY speed\n");
             return -EINVAL;
         }
+    } else if (state->link &&
+               state->interface != PHY_INTERFACE_MODE_SGMII) {
+        /* If Speed and Duplex Resolved register is 0 and link is up, it
+         * means that AN was enabled, but link partner had it disabled
+         * and the PHY invoked the Auto-Negotiation Bypass feature and
+         * linked anyway.
+         */
+        state->duplex = DUPLEX_FULL;
+        if (state->interface == PHY_INTERFACE_MODE_2500BASEX)
+            state->speed = SPEED_2500;
+        else
+            state->speed = SPEED_1000;
     } else {
         state->link = false;
     }
@@ -168,9 +191,15 @@ int mv88e6352_serdes_pcs_config(struct mv88e6xxx_chip *chip, int port,
 int mv88e6352_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, int port,
                                    int lane, struct phylink_link_state *state)
 {
-    u16 lpa, status;
+    u16 lpa, status, ctrl;
     int err;
 
+    err = mv88e6352_serdes_read(chip, MII_BMCR, &ctrl);
+    if (err) {
+        dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);
+        return err;
+    }
+
     err = mv88e6352_serdes_read(chip, 0x11, &status);
     if (err) {
         dev_err(chip->dev, "can't read Serdes PHY status: %d\n", err);
@@ -183,7 +212,7 @@ int mv88e6352_serdes_pcs_get_state(struct mv88e6xxx_chip *chip, int port,
         return err;
     }
 
-    return mv88e6xxx_serdes_pcs_get_state(chip, status, lpa, state);
+    return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);
 }
 
 int mv88e6352_serdes_pcs_an_restart(struct mv88e6xxx_chip *chip, int port,
@@ -883,9 +912,16 @@ int mv88e6390_serdes_pcs_config(struct mv88e6xxx_chip *chip, int port,
 static int mv88e6390_serdes_pcs_get_state_sgmii(struct mv88e6xxx_chip *chip,
     int port, int lane, struct phylink_link_state *state)
 {
-    u16 lpa, status;
+    u16 lpa, status, ctrl;
     int err;
 
+    err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
+                                MV88E6390_SGMII_BMCR, &ctrl);
+    if (err) {
+        dev_err(chip->dev, "can't read Serdes PHY control: %d\n", err);
+        return err;
+    }
+
     err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
                                 MV88E6390_SGMII_PHY_STATUS, &status);
     if (err) {
@@ -900,7 +936,7 @@ static int mv88e6390_serdes_pcs_get_state_sgmii(struct mv88e6xxx_chip *chip,
         return err;
     }
 
-    return mv88e6xxx_serdes_pcs_get_state(chip, status, lpa, state);
+    return mv88e6xxx_serdes_pcs_get_state(chip, ctrl, status, lpa, state);
 }
 
 static int mv88e6390_serdes_pcs_get_state_10g(struct mv88e6xxx_chip *chip,
@@ -1271,9 +1307,31 @@ void mv88e6390_serdes_get_regs(struct mv88e6xxx_chip *chip, int port, void *_p)
     }
 }
 
-static int mv88e6393x_serdes_port_errata(struct mv88e6xxx_chip *chip, int lane)
+static int mv88e6393x_serdes_power_lane(struct mv88e6xxx_chip *chip, int lane,
+                                        bool on)
 {
-    u16 reg, pcs;
+    u16 reg;
+    int err;
+
+    err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
+                                MV88E6393X_SERDES_CTRL1, &reg);
+    if (err)
+        return err;
+
+    if (on)
+        reg &= ~(MV88E6393X_SERDES_CTRL1_TX_PDOWN |
+                 MV88E6393X_SERDES_CTRL1_RX_PDOWN);
+    else
+        reg |= MV88E6393X_SERDES_CTRL1_TX_PDOWN |
+               MV88E6393X_SERDES_CTRL1_RX_PDOWN;
+
+    return mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,
+                                  MV88E6393X_SERDES_CTRL1, reg);
+}
+
+static int mv88e6393x_serdes_erratum_4_6(struct mv88e6xxx_chip *chip, int lane)
+{
+    u16 reg;
     int err;
 
     /* mv88e6393x family errata 4.6:
@@ -1284,26 +1342,45 @@ static int mv88e6393x_serdes_erratum_4_6(struct mv88e6xxx_chip *chip, int lane)
      * It seems that after this workaround the SERDES is automatically
      * powered up (the bit is cleared), so power it down.
      */
-    if (lane == MV88E6393X_PORT0_LANE || lane == MV88E6393X_PORT9_LANE ||
-        lane == MV88E6393X_PORT10_LANE) {
-        err = mv88e6390_serdes_read(chip, lane,
-                                    MDIO_MMD_PHYXS,
-                                    MV88E6393X_SERDES_POC, &reg);
-        if (err)
-            return err;
+    err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
+                                MV88E6393X_SERDES_POC, &reg);
+    if (err)
+        return err;
 
-        reg &= ~MV88E6393X_SERDES_POC_PDOWN;
-        reg |= MV88E6393X_SERDES_POC_RESET;
+    reg &= ~MV88E6393X_SERDES_POC_PDOWN;
+    reg |= MV88E6393X_SERDES_POC_RESET;
 
-        err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,
-                                     MV88E6393X_SERDES_POC, reg);
-        if (err)
-            return err;
+    err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,
+                                 MV88E6393X_SERDES_POC, reg);
+    if (err)
+        return err;
 
-        err = mv88e6390_serdes_power_sgmii(chip, lane, false);
-        if (err)
-            return err;
-    }
+    err = mv88e6390_serdes_power_sgmii(chip, lane, false);
+    if (err)
+        return err;
 
+    return mv88e6393x_serdes_power_lane(chip, lane, false);
+}
+
+int mv88e6393x_serdes_setup_errata(struct mv88e6xxx_chip *chip)
+{
+    int err;
+
+    err = mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT0_LANE);
+    if (err)
+        return err;
+
+    err = mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT9_LANE);
+    if (err)
+        return err;
+
+    return mv88e6393x_serdes_erratum_4_6(chip, MV88E6393X_PORT10_LANE);
+}
+
+static int mv88e6393x_serdes_erratum_4_8(struct mv88e6xxx_chip *chip, int lane)
+{
+    u16 reg, pcs;
+    int err;
 
     /* mv88e6393x family errata 4.8:
      * When a SERDES port is operating in 1000BASE-X or SGMII mode link may
@@ -1334,38 +1411,149 @@ static int mv88e6393x_serdes_erratum_4_8(struct mv88e6xxx_chip *chip, int lane)
                                   MV88E6393X_ERRATA_4_8_REG, reg);
 }
 
-int mv88e6393x_serdes_setup_errata(struct mv88e6xxx_chip *chip)
+static int mv88e6393x_serdes_erratum_5_2(struct mv88e6xxx_chip *chip, int lane,
+                                         u8 cmode)
 {
+    static const struct {
+        u16 dev, reg, val, mask;
+    } fixes[] = {
+        { MDIO_MMD_VEND1, 0x8093, 0xcb5a, 0xffff },
+        { MDIO_MMD_VEND1, 0x8171, 0x7088, 0xffff },
+        { MDIO_MMD_VEND1, 0x80c9, 0x311a, 0xffff },
+        { MDIO_MMD_VEND1, 0x80a2, 0x8000, 0xff7f },
+        { MDIO_MMD_VEND1, 0x80a9, 0x0000, 0xfff0 },
+        { MDIO_MMD_VEND1, 0x80a3, 0x0000, 0xf8ff },
+        { MDIO_MMD_PHYXS, MV88E6393X_SERDES_POC,
+          MV88E6393X_SERDES_POC_RESET, MV88E6393X_SERDES_POC_RESET },
+    };
+    int err, i;
+    u16 reg;
+
+    /* mv88e6393x family errata 5.2:
+     * For optimal signal integrity the following sequence should be applied
+     * to SERDES operating in 10G mode. These registers only apply to 10G
+     * operation and have no effect on other speeds.
+     */
+    if (cmode != MV88E6393X_PORT_STS_CMODE_10GBASER)
+        return 0;
+
+    for (i = 0; i < ARRAY_SIZE(fixes); ++i) {
+        err = mv88e6390_serdes_read(chip, lane, fixes[i].dev,
+                                    fixes[i].reg, &reg);
+        if (err)
+            return err;
+
+        reg &= ~fixes[i].mask;
+        reg |= fixes[i].val;
+
+        err = mv88e6390_serdes_write(chip, lane, fixes[i].dev,
+                                     fixes[i].reg, reg);
+        if (err)
+            return err;
+    }
+
+    return 0;
+}
+
+static int mv88e6393x_serdes_fix_2500basex_an(struct mv88e6xxx_chip *chip,
+                                              int lane, u8 cmode, bool on)
+{
+    u16 reg;
     int err;
 
-    err = mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT0_LANE);
+    if (cmode != MV88E6XXX_PORT_STS_CMODE_2500BASEX)
+        return 0;
+
+    /* Inband AN is broken on Amethyst in 2500base-x mode when set by
+     * standard mechanism (via cmode).
+     * We can get around this by configuring the PCS mode to 1000base-x
+     * and then writing value 0x58 to register 1e.8000. (This must be done
+     * while SerDes receiver and transmitter are disabled, which is, when
+     * this function is called.)
+     * It seem that when we do this configuration to 2500base-x mode (by
+     * changing PCS mode to 1000base-x and frequency to 3.125 GHz from
+     * 1.25 GHz) and then configure to sgmii or 1000base-x, the device
+     * thinks that it already has SerDes at 1.25 GHz and does not change
+     * the 1e.8000 register, leaving SerDes at 3.125 GHz.
+     * To avoid this, change PCS mode back to 2500base-x when disabling
+     * SerDes from 2500base-x mode.
+     */
+    err = mv88e6390_serdes_read(chip, lane, MDIO_MMD_PHYXS,
+                                MV88E6393X_SERDES_POC, &reg);
     if (err)
         return err;
 
-    err = mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT9_LANE);
+    reg &= ~(MV88E6393X_SERDES_POC_PCS_MASK | MV88E6393X_SERDES_POC_AN);
+    if (on)
+        reg |= MV88E6393X_SERDES_POC_PCS_1000BASEX |
+               MV88E6393X_SERDES_POC_AN;
+    else
+        reg |= MV88E6393X_SERDES_POC_PCS_2500BASEX;
+    reg |= MV88E6393X_SERDES_POC_RESET;
+
+    err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_PHYXS,
+                                 MV88E6393X_SERDES_POC, reg);
     if (err)
         return err;
 
-    return mv88e6393x_serdes_port_errata(chip, MV88E6393X_PORT10_LANE);
+    err = mv88e6390_serdes_write(chip, lane, MDIO_MMD_VEND1, 0x8000, 0x58);
+    if (err)
+        return err;
+
+    return 0;
 }
 
 int mv88e6393x_serdes_power(struct mv88e6xxx_chip *chip, int port, int lane,
                             bool on)
 {
     u8 cmode = chip->ports[port].cmode;
+    int err;
 
     if (port != 0 && port != 9 && port != 10)
         return -EOPNOTSUPP;
 
+    if (on) {
+        err = mv88e6393x_serdes_erratum_4_8(chip, lane);
+        if (err)
+            return err;
+
+        err = mv88e6393x_serdes_erratum_5_2(chip, lane, cmode);
+        if (err)
+            return err;
+
+        err = mv88e6393x_serdes_fix_2500basex_an(chip, lane, cmode,
+                                                 true);
+        if (err)
+            return err;
+
+        err = mv88e6393x_serdes_power_lane(chip, lane, true);
+        if (err)
+            return err;
+    }
+
     switch (cmode) {
     case MV88E6XXX_PORT_STS_CMODE_SGMII:
     case MV88E6XXX_PORT_STS_CMODE_1000BASEX:
     case MV88E6XXX_PORT_STS_CMODE_2500BASEX:
-        return mv88e6390_serdes_power_sgmii(chip, lane, on);
+        err = mv88e6390_serdes_power_sgmii(chip, lane, on);
+        break;
     case MV88E6393X_PORT_STS_CMODE_5GBASER:
     case MV88E6393X_PORT_STS_CMODE_10GBASER:
-        return mv88e6390_serdes_power_10g(chip, lane, on);
+        err = mv88e6390_serdes_power_10g(chip, lane, on);
+        break;
     }
 
-    return 0;
+    if (err)
+        return err;
+
+    if (!on) {
+        err = mv88e6393x_serdes_power_lane(chip, lane, false);
+        if (err)
+            return err;
+
+        err = mv88e6393x_serdes_fix_2500basex_an(chip, lane, cmode,
+                                                 false);
+    }
+
+    return err;
 }
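The reworked mv88e6xxx_serdes_pcs_get_state() above distinguishes three outcomes: speed/duplex resolved (AN complete, or AN disabled), link up but unresolved (the Auto-Negotiation Bypass case this pull fixes), and no link. A hedged, self-contained sketch of just that decision logic, with the register bits reduced to booleans and all names illustrative:

```c
#include <assert.h>
#include <stdbool.h>

enum mode { MODE_SGMII, MODE_1000BASEX, MODE_2500BASEX };

struct state { bool link, an_complete; int speed; };

/* resolved models the Speed and Duplex Resolved bit; an_enabled models
 * BMCR_ANENABLE read from the SerDes control register. */
static void pcs_get_state(struct state *s, enum mode mode, bool link_up,
                          bool resolved, bool an_enabled)
{
    s->link = link_up;
    s->an_complete = false;
    s->speed = 0;
    if (resolved) {
        /* Resolved is also 1 when AN is off, so an_complete must be
         * gated on BMCR_ANENABLE rather than on the resolved bit. */
        s->an_complete = an_enabled;
        s->speed = (mode == MODE_2500BASEX) ? 2500 : 1000; /* simplified */
    } else if (link_up && mode != MODE_SGMII) {
        /* AN bypass: partner had AN off, PHY linked anyway at the
         * speed implied by the interface mode. */
        s->speed = (mode == MODE_2500BASEX) ? 2500 : 1000;
    } else {
        s->link = false;
    }
}
```

The real function additionally decodes half/full duplex and the SGMII speed field; this sketch keeps only the case split that the fix introduces.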

@@ -93,6 +93,10 @@
 #define MV88E6393X_SERDES_POC_PCS_MASK      0x0007
 #define MV88E6393X_SERDES_POC_RESET         BIT(15)
 #define MV88E6393X_SERDES_POC_PDOWN         BIT(5)
+#define MV88E6393X_SERDES_POC_AN            BIT(3)
+#define MV88E6393X_SERDES_CTRL1             0xf003
+#define MV88E6393X_SERDES_CTRL1_TX_PDOWN    BIT(9)
+#define MV88E6393X_SERDES_CTRL1_RX_PDOWN    BIT(8)
 
 #define MV88E6393X_ERRATA_4_8_REG           0xF074
 #define MV88E6393X_ERRATA_4_8_BIT           BIT(14)

@@ -107,6 +107,7 @@
 #define RTL8365MB_LEARN_LIMIT_MAX_8365MB_VC 2112
 
 /* Family-specific data and limits */
+#define RTL8365MB_PHYADDRMAX 7
 #define RTL8365MB_NUM_PHYREGS 32
 #define RTL8365MB_PHYREGMAX (RTL8365MB_NUM_PHYREGS - 1)
 #define RTL8365MB_MAX_NUM_PORTS (RTL8365MB_CPU_PORT_NUM_8365MB_VC + 1)
@@ -176,7 +177,7 @@
 #define RTL8365MB_INDIRECT_ACCESS_STATUS_REG                0x1F01
 #define RTL8365MB_INDIRECT_ACCESS_ADDRESS_REG               0x1F02
 #define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_5_1_MASK GENMASK(4, 0)
-#define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK     GENMASK(6, 5)
+#define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_PHYNUM_MASK     GENMASK(7, 5)
 #define   RTL8365MB_INDIRECT_ACCESS_ADDRESS_OCPADR_9_6_MASK GENMASK(11, 8)
 #define RTL8365MB_PHY_BASE                                  0x2000
 #define RTL8365MB_INDIRECT_ACCESS_WRITE_DATA_REG            0x1F03
@@ -679,6 +680,9 @@ static int rtl8365mb_phy_read(struct realtek_smi *smi, int phy, int regnum)
     u16 val;
     int ret;
 
+    if (phy > RTL8365MB_PHYADDRMAX)
+        return -EINVAL;
+
     if (regnum > RTL8365MB_PHYREGMAX)
         return -EINVAL;
 
@@ -704,6 +708,9 @@ static int rtl8365mb_phy_write(struct realtek_smi *smi, int phy, int regnum,
     u32 ocp_addr;
     int ret;
 
+    if (phy > RTL8365MB_PHYADDRMAX)
+        return -EINVAL;
+
    if (regnum > RTL8365MB_PHYREGMAX)
        return -EINVAL;
 
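The one-character PHYNUM_MASK change above is the substance of this fix: with GENMASK(6, 5) the field holds only two bits, so PHY addresses 4-7 were silently truncated modulo 4 and the wrong PHY was addressed (hence also the new RTL8365MB_PHYADDRMAX guard). A small sketch demonstrating the truncation, with plain-C stand-ins for the kernel's GENMASK()/FIELD_PREP() helpers (gcc/clang `__builtin_ctz` assumed):

```c
#include <assert.h>
#include <stdint.h>

/* Plain-C equivalents of the kernel's GENMASK() and FIELD_PREP(). */
#define GENMASK(h, l)     (((1u << ((h) - (l) + 1)) - 1) << (l))
#define FIELD_PREP(m, v)  (((uint32_t)(v) << __builtin_ctz(m)) & (m))

#define PHYNUM_MASK_OLD GENMASK(6, 5) /* 2 bits: can encode phy 0..3 only */
#define PHYNUM_MASK_NEW GENMASK(7, 5) /* 3 bits: can encode phy 0..7 */

/* Returns the phy number that a reader of the register field would
 * decode after the value was packed through the given mask. */
static unsigned int encode_phy(uint32_t mask, unsigned int phy)
{
    return FIELD_PREP(mask, phy) >> __builtin_ctz(mask);
}
```

With the old 2-bit mask, packing PHY 5 yields 1 on readback; the widened mask round-trips all eight addresses.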

@@ -40,10 +40,12 @@
 
 #define AQ_DEVICE_ID_AQC113DEV  0x00C0
 #define AQ_DEVICE_ID_AQC113CS   0x94C0
+#define AQ_DEVICE_ID_AQC113CA   0x34C0
 #define AQ_DEVICE_ID_AQC114CS   0x93C0
 #define AQ_DEVICE_ID_AQC113     0x04C0
 #define AQ_DEVICE_ID_AQC113C    0x14C0
 #define AQ_DEVICE_ID_AQC115C    0x12C0
+#define AQ_DEVICE_ID_AQC116C    0x11C0
 
 #define HW_ATL_NIC_NAME "Marvell (aQuantia) AQtion 10Gbit Network Adapter"
 
@@ -53,20 +55,19 @@
 
 #define AQ_NIC_RATE_10G         BIT(0)
 #define AQ_NIC_RATE_5G          BIT(1)
-#define AQ_NIC_RATE_5GSR        BIT(2)
-#define AQ_NIC_RATE_2G5         BIT(3)
-#define AQ_NIC_RATE_1G          BIT(4)
-#define AQ_NIC_RATE_100M        BIT(5)
-#define AQ_NIC_RATE_10M         BIT(6)
-#define AQ_NIC_RATE_1G_HALF     BIT(7)
-#define AQ_NIC_RATE_100M_HALF   BIT(8)
-#define AQ_NIC_RATE_10M_HALF    BIT(9)
+#define AQ_NIC_RATE_2G5         BIT(2)
+#define AQ_NIC_RATE_1G          BIT(3)
+#define AQ_NIC_RATE_100M        BIT(4)
+#define AQ_NIC_RATE_10M         BIT(5)
+#define AQ_NIC_RATE_1G_HALF     BIT(6)
+#define AQ_NIC_RATE_100M_HALF   BIT(7)
+#define AQ_NIC_RATE_10M_HALF    BIT(8)
 
-#define AQ_NIC_RATE_EEE_10G     BIT(10)
-#define AQ_NIC_RATE_EEE_5G      BIT(11)
-#define AQ_NIC_RATE_EEE_2G5     BIT(12)
-#define AQ_NIC_RATE_EEE_1G      BIT(13)
-#define AQ_NIC_RATE_EEE_100M    BIT(14)
+#define AQ_NIC_RATE_EEE_10G     BIT(9)
+#define AQ_NIC_RATE_EEE_5G      BIT(10)
+#define AQ_NIC_RATE_EEE_2G5     BIT(11)
+#define AQ_NIC_RATE_EEE_1G      BIT(12)
+#define AQ_NIC_RATE_EEE_100M    BIT(13)
 #define AQ_NIC_RATE_EEE_MSK     (AQ_NIC_RATE_EEE_10G |\
                                  AQ_NIC_RATE_EEE_5G |\
                                  AQ_NIC_RATE_EEE_2G5 |\

@@ -80,6 +80,8 @@ struct aq_hw_link_status_s {
 };
 
 struct aq_stats_s {
+    u64 brc;
+    u64 btc;
     u64 uprc;
     u64 mprc;
     u64 bprc;

@@ -316,18 +316,22 @@ int aq_nic_ndev_register(struct aq_nic_s *self)
     aq_macsec_init(self);
 #endif
 
-    mutex_lock(&self->fwreq_mutex);
-    err = self->aq_fw_ops->get_mac_permanent(self->aq_hw, addr);
-    mutex_unlock(&self->fwreq_mutex);
-    if (err)
-        goto err_exit;
-
-    eth_hw_addr_set(self->ndev, addr);
-
-    if (!is_valid_ether_addr(self->ndev->dev_addr) ||
-        !aq_nic_is_valid_ether_addr(self->ndev->dev_addr)) {
-        netdev_warn(self->ndev, "MAC is invalid, will use random.");
-        eth_hw_addr_random(self->ndev);
+    if (platform_get_ethdev_address(&self->pdev->dev, self->ndev) != 0) {
+        // If DT has none or an invalid one, ask device for MAC address
+        mutex_lock(&self->fwreq_mutex);
+        err = self->aq_fw_ops->get_mac_permanent(self->aq_hw, addr);
+        mutex_unlock(&self->fwreq_mutex);
+
+        if (err)
+            goto err_exit;
+
+        if (is_valid_ether_addr(addr) &&
+            aq_nic_is_valid_ether_addr(addr)) {
+            eth_hw_addr_set(self->ndev, addr);
+        } else {
+            netdev_warn(self->ndev, "MAC is invalid, will use random.");
+            eth_hw_addr_random(self->ndev);
+        }
     }
 
 #if defined(AQ_CFG_MAC_ADDR_PERMANENT)
@@ -905,8 +909,14 @@ u64 *aq_nic_get_stats(struct aq_nic_s *self, u64 *data)
     data[++i] = stats->mbtc;
     data[++i] = stats->bbrc;
     data[++i] = stats->bbtc;
-    data[++i] = stats->ubrc + stats->mbrc + stats->bbrc;
-    data[++i] = stats->ubtc + stats->mbtc + stats->bbtc;
+    if (stats->brc)
+        data[++i] = stats->brc;
+    else
+        data[++i] = stats->ubrc + stats->mbrc + stats->bbrc;
+    if (stats->btc)
+        data[++i] = stats->btc;
+    else
+        data[++i] = stats->ubtc + stats->mbtc + stats->bbtc;
     data[++i] = stats->dma_pkt_rc;
     data[++i] = stats->dma_pkt_tc;
     data[++i] = stats->dma_oct_rc;

@@ -49,6 +49,8 @@ static const struct pci_device_id aq_pci_tbl[] = {
     { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC113), },
     { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC113C), },
     { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC115C), },
+    { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC113CA), },
+    { PCI_VDEVICE(AQUANTIA, AQ_DEVICE_ID_AQC116C), },
 
     {}
 };
@@ -85,7 +87,10 @@ static const struct aq_board_revision_s hw_atl_boards[] = {
     { AQ_DEVICE_ID_AQC113CS, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, },
     { AQ_DEVICE_ID_AQC114CS, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, },
     { AQ_DEVICE_ID_AQC113C,  AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, },
-    { AQ_DEVICE_ID_AQC115C,  AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, },
+    { AQ_DEVICE_ID_AQC115C,  AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc115c, },
+    { AQ_DEVICE_ID_AQC113CA, AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc113, },
+    { AQ_DEVICE_ID_AQC116C,  AQ_HWREV_ANY, &hw_atl2_ops, &hw_atl2_caps_aqc116c, },
 };
 
 MODULE_DEVICE_TABLE(pci, aq_pci_tbl);

@@ -362,9 +362,6 @@ unsigned int aq_vec_get_sw_stats(struct aq_vec_s *self, const unsigned int tc, u
 {
     unsigned int count;
 
-    WARN_ONCE(!aq_vec_is_valid_tc(self, tc),
-              "Invalid tc %u (#rx=%u, #tx=%u)\n",
-              tc, self->rx_rings, self->tx_rings);
     if (!aq_vec_is_valid_tc(self, tc))
         return 0;
 

@@ -867,12 +867,20 @@ static int hw_atl_fw1x_deinit(struct aq_hw_s *self)
 int hw_atl_utils_update_stats(struct aq_hw_s *self)
 {
     struct aq_stats_s *cs = &self->curr_stats;
+    struct aq_stats_s curr_stats = *cs;
     struct hw_atl_utils_mbox mbox;
+    bool corrupted_stats = false;
 
     hw_atl_utils_mpi_read_stats(self, &mbox);
 
-#define AQ_SDELTA(_N_) (self->curr_stats._N_ += \
-                        mbox.stats._N_ - self->last_stats._N_)
+#define AQ_SDELTA(_N_)  \
+do { \
+    if (!corrupted_stats && \
+        ((s64)(mbox.stats._N_ - self->last_stats._N_)) >= 0) \
+        curr_stats._N_ += mbox.stats._N_ - self->last_stats._N_; \
+    else \
+        corrupted_stats = true; \
+} while (0)
 
     if (self->aq_link_status.mbps) {
         AQ_SDELTA(uprc);
@@ -892,6 +900,9 @@ int hw_atl_utils_update_stats(struct aq_hw_s *self)
         AQ_SDELTA(bbrc);
         AQ_SDELTA(bbtc);
         AQ_SDELTA(dpc);
+
+        if (!corrupted_stats)
+            *cs = curr_stats;
     }
 #undef AQ_SDELTA
 
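The reworked AQ_SDELTA above stages all deltas in a local copy and commits them only if no counter appears to have moved backwards, which would indicate a corrupted mailbox read. The same commit-or-discard pattern in a hedged, self-contained form, reduced to a two-counter struct with illustrative names:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct stats { uint64_t uprc, mprc; };

/* Apply firmware deltas to *cur atomically: either every counter moved
 * forward and all deltas are committed, or (on an apparent backward
 * jump) nothing is committed at all. Returns true on success. */
static bool stats_update(struct stats *cur, const struct stats *last,
                         const struct stats *mbox)
{
    struct stats staged = *cur;   /* stage deltas here, commit at the end */
    bool corrupted = false;

#define SDELTA(f) \
    do { \
        if (!corrupted && (int64_t)(mbox->f - last->f) >= 0) \
            staged.f += mbox->f - last->f; \
        else \
            corrupted = true; \
    } while (0)

    SDELTA(uprc);
    SDELTA(mprc);
#undef SDELTA

    if (!corrupted)
        *cur = staged;
    return !corrupted;
}
```

The old macro added each delta directly into the live counters, so a single bad mailbox sample could permanently inflate them; staging makes the update all-or-nothing.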

@@ -132,9 +132,6 @@ static enum hw_atl_fw2x_rate link_speed_mask_2fw2x_ratemask(u32 speed)
     if (speed & AQ_NIC_RATE_5G)
         rate |= FW2X_RATE_5G;
 
-    if (speed & AQ_NIC_RATE_5GSR)
-        rate |= FW2X_RATE_5G;
-
     if (speed & AQ_NIC_RATE_2G5)
         rate |= FW2X_RATE_2G5;
 

@@ -65,11 +65,25 @@ const struct aq_hw_caps_s hw_atl2_caps_aqc113 = {
                       AQ_NIC_RATE_5G |
                       AQ_NIC_RATE_2G5 |
                       AQ_NIC_RATE_1G |
-                      AQ_NIC_RATE_1G_HALF |
                       AQ_NIC_RATE_100M |
-                      AQ_NIC_RATE_100M_HALF |
-                      AQ_NIC_RATE_10M |
-                      AQ_NIC_RATE_10M_HALF,
+                      AQ_NIC_RATE_10M,
+};
+
+const struct aq_hw_caps_s hw_atl2_caps_aqc115c = {
+    DEFAULT_BOARD_BASIC_CAPABILITIES,
+    .media_type = AQ_HW_MEDIA_TYPE_TP,
+    .link_speed_msk = AQ_NIC_RATE_2G5 |
+                      AQ_NIC_RATE_1G |
+                      AQ_NIC_RATE_100M |
+                      AQ_NIC_RATE_10M,
+};
+
+const struct aq_hw_caps_s hw_atl2_caps_aqc116c = {
+    DEFAULT_BOARD_BASIC_CAPABILITIES,
+    .media_type = AQ_HW_MEDIA_TYPE_TP,
+    .link_speed_msk = AQ_NIC_RATE_1G |
+                      AQ_NIC_RATE_100M |
+                      AQ_NIC_RATE_10M,
 };
 
 static u32 hw_atl2_sem_act_rslvr_get(struct aq_hw_s *self)

@@ -9,6 +9,8 @@
 #include "aq_common.h"
 
 extern const struct aq_hw_caps_s hw_atl2_caps_aqc113;
+extern const struct aq_hw_caps_s hw_atl2_caps_aqc115c;
+extern const struct aq_hw_caps_s hw_atl2_caps_aqc116c;
 extern const struct aq_hw_ops hw_atl2_ops;
 
 #endif /* HW_ATL2_H */

@@ -239,7 +239,8 @@ struct version_s {
         u8 minor;
         u16 build;
     } phy;
-    u32 rsvd;
+    u32 drv_iface_ver:4;
+    u32 rsvd:28;
 };
 
 struct link_status_s {
@@ -424,7 +425,7 @@ struct cable_diag_status_s {
     u16 rsvd2;
 };
 
-struct statistics_s {
+struct statistics_a0_s {
     struct {
         u32 link_up;
         u32 link_down;
@@ -457,6 +458,33 @@ struct statistics_a0_s {
     u32 reserve_fw_gap;
 };
 
+struct __packed statistics_b0_s {
+    u64 rx_good_octets;
+    u64 rx_pause_frames;
+    u64 rx_good_frames;
+    u64 rx_errors;
+    u64 rx_unicast_frames;
+    u64 rx_multicast_frames;
+    u64 rx_broadcast_frames;
+
+    u64 tx_good_octets;
+    u64 tx_pause_frames;
+    u64 tx_good_frames;
+    u64 tx_errors;
+    u64 tx_unicast_frames;
+    u64 tx_multicast_frames;
+    u64 tx_broadcast_frames;
+
+    u32 main_loop_cycles;
+};
+
+struct __packed statistics_s {
+    union __packed {
+        struct statistics_a0_s a0;
+        struct statistics_b0_s b0;
+    };
+};
+
 struct filter_caps_s {
     u8 l2_filters_base_index:6;
     u8 flexible_filter_mask:2;
@@ -545,7 +573,7 @@ struct management_status_s {
     u32 rsvd5;
 };
 
-struct fw_interface_out {
+struct __packed fw_interface_out {
     struct transaction_counter_s transaction_id;
     struct version_s version;
     struct link_status_s link_status;
@@ -569,7 +597,6 @@ struct fw_interface_out {
     struct core_dump_s core_dump;
     u32 rsvd11;
     struct statistics_s stats;
-    u32 rsvd12;
     struct filter_caps_s filter_caps;
     struct device_caps_s device_caps;
     u32 rsvd13;
@@ -592,6 +619,9 @@ struct fw_interface_out {
 #define AQ_HOST_MODE_LOW_POWER    3U
 #define AQ_HOST_MODE_SHUTDOWN     4U
 
+#define AQ_A2_FW_INTERFACE_A0     0
+#define AQ_A2_FW_INTERFACE_B0     1
+
 int hw_atl2_utils_initfw(struct aq_hw_s *self, const struct aq_fw_ops **fw_ops);
 
 int hw_atl2_utils_soft_reset(struct aq_hw_s *self);


@@ -84,7 +84,7 @@ static int hw_atl2_shared_buffer_read_block(struct aq_hw_s *self,
 			if (cnt > AQ_A2_FW_READ_TRY_MAX)
 				return -ETIME;
 			if (tid1.transaction_cnt_a != tid1.transaction_cnt_b)
-				udelay(1);
+				mdelay(1);
 		} while (tid1.transaction_cnt_a != tid1.transaction_cnt_b);
 
 		hw_atl2_mif_shared_buf_read(self, offset, (u32 *)data, dwords);
@@ -154,7 +154,7 @@ static void a2_link_speed_mask2fw(u32 speed,
 {
 	link_options->rate_10G = !!(speed & AQ_NIC_RATE_10G);
 	link_options->rate_5G = !!(speed & AQ_NIC_RATE_5G);
-	link_options->rate_N5G = !!(speed & AQ_NIC_RATE_5GSR);
+	link_options->rate_N5G = link_options->rate_5G;
 	link_options->rate_2P5G = !!(speed & AQ_NIC_RATE_2G5);
 	link_options->rate_N2P5G = link_options->rate_2P5G;
 	link_options->rate_1G = !!(speed & AQ_NIC_RATE_1G);
@@ -192,8 +192,6 @@ static u32 a2_fw_lkp_to_mask(struct lkp_link_caps_s *lkp_link_caps)
 		rate |= AQ_NIC_RATE_10G;
 	if (lkp_link_caps->rate_5G)
 		rate |= AQ_NIC_RATE_5G;
-	if (lkp_link_caps->rate_N5G)
-		rate |= AQ_NIC_RATE_5GSR;
 	if (lkp_link_caps->rate_2P5G)
 		rate |= AQ_NIC_RATE_2G5;
 	if (lkp_link_caps->rate_1G)
@@ -335,15 +333,22 @@ static int aq_a2_fw_get_mac_permanent(struct aq_hw_s *self, u8 *mac)
 	return 0;
 }
 
-static int aq_a2_fw_update_stats(struct aq_hw_s *self)
+static void aq_a2_fill_a0_stats(struct aq_hw_s *self,
+				struct statistics_s *stats)
 {
 	struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv;
-	struct statistics_s stats;
+	struct aq_stats_s *cs = &self->curr_stats;
+	struct aq_stats_s curr_stats = *cs;
+	bool corrupted_stats = false;
 
-	hw_atl2_shared_buffer_read_safe(self, stats, &stats);
-
-#define AQ_SDELTA(_N_, _F_) (self->curr_stats._N_ += \
-			stats.msm._F_ - priv->last_stats.msm._F_)
+#define AQ_SDELTA(_N, _F) \
+do { \
+	if (!corrupted_stats && \
+	    ((s64)(stats->a0.msm._F - priv->last_stats.a0.msm._F)) >= 0) \
+		curr_stats._N += stats->a0.msm._F - priv->last_stats.a0.msm._F;\
+	else \
+		corrupted_stats = true; \
+} while (0)
 
 	if (self->aq_link_status.mbps) {
 		AQ_SDELTA(uprc, rx_unicast_frames);
@@ -362,17 +367,76 @@ static int aq_a2_fw_update_stats(struct aq_hw_s *self)
 		AQ_SDELTA(mbtc, tx_multicast_octets);
 		AQ_SDELTA(bbrc, rx_broadcast_octets);
 		AQ_SDELTA(bbtc, tx_broadcast_octets);
+
+		if (!corrupted_stats)
+			*cs = curr_stats;
 	}
 #undef AQ_SDELTA
 
-	self->curr_stats.dma_pkt_rc =
-		hw_atl_stats_rx_dma_good_pkt_counter_get(self);
-	self->curr_stats.dma_pkt_tc =
-		hw_atl_stats_tx_dma_good_pkt_counter_get(self);
-	self->curr_stats.dma_oct_rc =
-		hw_atl_stats_rx_dma_good_octet_counter_get(self);
-	self->curr_stats.dma_oct_tc =
-		hw_atl_stats_tx_dma_good_octet_counter_get(self);
-	self->curr_stats.dpc = hw_atl_rpb_rx_dma_drop_pkt_cnt_get(self);
+}
+
+static void aq_a2_fill_b0_stats(struct aq_hw_s *self,
+				struct statistics_s *stats)
+{
+	struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv;
+	struct aq_stats_s *cs = &self->curr_stats;
+	struct aq_stats_s curr_stats = *cs;
+	bool corrupted_stats = false;
+
+#define AQ_SDELTA(_N, _F) \
+do { \
+	if (!corrupted_stats && \
+	    ((s64)(stats->b0._F - priv->last_stats.b0._F)) >= 0) \
+		curr_stats._N += stats->b0._F - priv->last_stats.b0._F; \
+	else \
+		corrupted_stats = true; \
+} while (0)
+
+	if (self->aq_link_status.mbps) {
+		AQ_SDELTA(uprc, rx_unicast_frames);
+		AQ_SDELTA(mprc, rx_multicast_frames);
+		AQ_SDELTA(bprc, rx_broadcast_frames);
+		AQ_SDELTA(erpr, rx_errors);
+		AQ_SDELTA(brc, rx_good_octets);
+
+		AQ_SDELTA(uptc, tx_unicast_frames);
+		AQ_SDELTA(mptc, tx_multicast_frames);
+		AQ_SDELTA(bptc, tx_broadcast_frames);
+		AQ_SDELTA(erpt, tx_errors);
+		AQ_SDELTA(btc, tx_good_octets);
+
+		if (!corrupted_stats)
+			*cs = curr_stats;
+	}
+#undef AQ_SDELTA
+}
+
+static int aq_a2_fw_update_stats(struct aq_hw_s *self)
+{
+	struct hw_atl2_priv *priv = (struct hw_atl2_priv *)self->priv;
+	struct aq_stats_s *cs = &self->curr_stats;
+	struct statistics_s stats;
+	struct version_s version;
+	int err;
+
+	err = hw_atl2_shared_buffer_read_safe(self, version, &version);
+	if (err)
+		return err;
+
+	err = hw_atl2_shared_buffer_read_safe(self, stats, &stats);
+	if (err)
+		return err;
+
+	if (version.drv_iface_ver == AQ_A2_FW_INTERFACE_A0)
+		aq_a2_fill_a0_stats(self, &stats);
+	else
+		aq_a2_fill_b0_stats(self, &stats);
+
+	cs->dma_pkt_rc = hw_atl_stats_rx_dma_good_pkt_counter_get(self);
+	cs->dma_pkt_tc = hw_atl_stats_tx_dma_good_pkt_counter_get(self);
+	cs->dma_oct_rc = hw_atl_stats_rx_dma_good_octet_counter_get(self);
+	cs->dma_oct_tc = hw_atl_stats_tx_dma_good_octet_counter_get(self);
+	cs->dpc = hw_atl_rpb_rx_dma_drop_pkt_cnt_get(self);
 
 	memcpy(&priv->last_stats, &stats, sizeof(stats));
 
@@ -499,9 +563,9 @@ u32 hw_atl2_utils_get_fw_version(struct aq_hw_s *self)
 	hw_atl2_shared_buffer_read_safe(self, version, &version);
 
 	/* A2 FW version is stored in reverse order */
-	return version.mac.major << 24 |
-	       version.mac.minor << 16 |
-	       version.mac.build;
+	return version.bundle.major << 24 |
+	       version.bundle.minor << 16 |
+	       version.bundle.build;
}
 
 int hw_atl2_utils_get_action_resolve_table_caps(struct aq_hw_s *self,
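The reworked AQ_SDELTA macro above only folds a firmware counter delta into the running totals when the delta is non-negative, so one corrupted or rolled-back firmware snapshot cannot drive the accumulated stats backwards. A minimal userspace sketch of that pattern (names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Running accumulator fed from periodic counter snapshots. */
struct counter_acc {
	uint64_t total;     /* accumulated, monotonically increasing */
	uint64_t last;      /* previous snapshot value */
	bool corrupted;     /* set when a snapshot went backwards */
};

static void acc_update(struct counter_acc *acc, uint64_t snapshot)
{
	int64_t delta = (int64_t)(snapshot - acc->last);

	if (delta >= 0)
		acc->total += (uint64_t)delta;   /* fold in the new delta */
	else
		acc->corrupted = true;           /* discard a backwards read */
	acc->last = snapshot;
}
```

The driver applies the same idea per batch: it builds the new totals in a local copy and commits them only if no field in the batch regressed.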


@@ -4550,6 +4550,8 @@ static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
 
 	fsl_mc_portal_free(priv->mc_io);
 
+	destroy_workqueue(priv->dpaa2_ptp_wq);
+
 	dev_dbg(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
 
 	free_netdev(net_dev);


@@ -628,17 +628,9 @@ static bool reuse_rx_pools(struct ibmvnic_adapter *adapter)
 	old_buff_size = adapter->prev_rx_buf_sz;
 	new_buff_size = adapter->cur_rx_buf_sz;
 
-	/* Require buff size to be exactly same for now */
-	if (old_buff_size != new_buff_size)
-		return false;
-
-	if (old_num_pools == new_num_pools && old_pool_size == new_pool_size)
-		return true;
-
-	if (old_num_pools < adapter->min_rx_queues ||
-	    old_num_pools > adapter->max_rx_queues ||
-	    old_pool_size < adapter->min_rx_add_entries_per_subcrq ||
-	    old_pool_size > adapter->max_rx_add_entries_per_subcrq)
+	if (old_buff_size != new_buff_size ||
+	    old_num_pools != new_num_pools ||
+	    old_pool_size != new_pool_size)
 		return false;
 
 	return true;
@@ -874,17 +866,9 @@ static bool reuse_tx_pools(struct ibmvnic_adapter *adapter)
 	old_mtu = adapter->prev_mtu;
 	new_mtu = adapter->req_mtu;
 
-	/* Require MTU to be exactly same to reuse pools for now */
-	if (old_mtu != new_mtu)
-		return false;
-
-	if (old_num_pools == new_num_pools && old_pool_size == new_pool_size)
-		return true;
-
-	if (old_num_pools < adapter->min_tx_queues ||
-	    old_num_pools > adapter->max_tx_queues ||
-	    old_pool_size < adapter->min_tx_entries_per_subcrq ||
-	    old_pool_size > adapter->max_tx_entries_per_subcrq)
+	if (old_mtu != new_mtu ||
+	    old_num_pools != new_num_pools ||
+	    old_pool_size != new_pool_size)
 		return false;
 
 	return true;


@@ -383,6 +383,7 @@ bool ice_alloc_rx_bufs_zc(struct ice_rx_ring *rx_ring, u16 count)
 		while (i--) {
 			dma = xsk_buff_xdp_get_dma(*xdp);
 			rx_desc->read.pkt_addr = cpu_to_le64(dma);
+			rx_desc->wb.status_error0 = 0;
 
 			rx_desc++;
 			xdp++;


@@ -7458,7 +7458,7 @@ static int mvpp2_probe(struct platform_device *pdev)
 
 	shared = num_present_cpus() - priv->nthreads;
 	if (shared > 0)
-		bitmap_fill(&priv->lock_map,
+		bitmap_set(&priv->lock_map, 0,
 			    min_t(int, shared, MVPP2_MAX_THREADS));
 
 	for (i = 0; i < MVPP2_MAX_THREADS; i++) {
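The mvpp2 hunk swaps `bitmap_fill()` for `bitmap_set()`: in the kernel, `bitmap_fill()` rounds the bit count up to whole `long`s, so it can set stray bits past the requested length, while `bitmap_set(map, start, nbits)` touches exactly `nbits` bits. A rough single-word model of the difference (the helpers below are hypothetical, not the kernel API):

```c
#include <assert.h>

#define BITS_PER_WORD 64U

/* bitmap_fill()-like behavior on one word: the length is rounded up
 * to the full word, so every bit ends up set regardless of nbits. */
static unsigned long fill_rounded(unsigned int nbits)
{
	(void)nbits;            /* one word is always fully set */
	return ~0UL;
}

/* bitmap_set()-like behavior: set exactly nbits low bits. */
static unsigned long set_exact(unsigned int nbits)
{
	return nbits >= BITS_PER_WORD ? ~0UL : (1UL << nbits) - 1;
}
```

With `nbits = 3`, `set_exact()` yields `0x7` while the fill-style helper yields an all-ones word, which is exactly the kind of over-set lock map the patch avoids.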


@@ -2341,7 +2341,7 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 			goto free_regions;
 		break;
 	default:
-		return err;
+		goto free_regions;
 	}
 
 	mw->mbox_wq = alloc_workqueue(name,


@@ -670,7 +670,7 @@ void __init mlx4_en_init_ptys2ethtool_map(void)
 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_1000BASE_T, SPEED_1000,
 				       ETHTOOL_LINK_MODE_1000baseT_Full_BIT);
 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_1000BASE_CX_SGMII, SPEED_1000,
-				       ETHTOOL_LINK_MODE_1000baseKX_Full_BIT);
+				       ETHTOOL_LINK_MODE_1000baseX_Full_BIT);
 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_1000BASE_KX, SPEED_1000,
 				       ETHTOOL_LINK_MODE_1000baseKX_Full_BIT);
 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_T, SPEED_10000,
@@ -682,9 +682,9 @@ void __init mlx4_en_init_ptys2ethtool_map(void)
 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_KR, SPEED_10000,
 				       ETHTOOL_LINK_MODE_10000baseKR_Full_BIT);
 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_CR, SPEED_10000,
-				       ETHTOOL_LINK_MODE_10000baseKR_Full_BIT);
+				       ETHTOOL_LINK_MODE_10000baseCR_Full_BIT);
 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_10GBASE_SR, SPEED_10000,
-				       ETHTOOL_LINK_MODE_10000baseKR_Full_BIT);
+				       ETHTOOL_LINK_MODE_10000baseSR_Full_BIT);
 	MLX4_BUILD_PTYS2ETHTOOL_CONFIG(MLX4_20GBASE_KR2, SPEED_20000,
 				       ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT,
 				       ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT);


@@ -2286,9 +2286,14 @@ int mlx4_en_try_alloc_resources(struct mlx4_en_priv *priv,
 				bool carry_xdp_prog)
 {
 	struct bpf_prog *xdp_prog;
-	int i, t;
+	int i, t, ret;
 
-	mlx4_en_copy_priv(tmp, priv, prof);
+	ret = mlx4_en_copy_priv(tmp, priv, prof);
+	if (ret) {
+		en_warn(priv, "%s: mlx4_en_copy_priv() failed, return\n",
+			__func__);
+		return ret;
+	}
 
 	if (mlx4_en_alloc_resources(tmp)) {
 		en_warn(priv,


@@ -341,6 +341,7 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
 	case MLX5_CMD_OP_DEALLOC_SF:
 	case MLX5_CMD_OP_DESTROY_UCTX:
 	case MLX5_CMD_OP_DESTROY_UMEM:
+	case MLX5_CMD_OP_MODIFY_RQT:
 		return MLX5_CMD_STAT_OK;
 
 	case MLX5_CMD_OP_QUERY_HCA_CAP:
@@ -446,7 +447,6 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
 	case MLX5_CMD_OP_MODIFY_TIS:
 	case MLX5_CMD_OP_QUERY_TIS:
 	case MLX5_CMD_OP_CREATE_RQT:
-	case MLX5_CMD_OP_MODIFY_RQT:
 	case MLX5_CMD_OP_QUERY_RQT:
 	case MLX5_CMD_OP_CREATE_FLOW_TABLE:


@@ -13,6 +13,9 @@ struct mlx5e_rx_res {
 	unsigned int max_nch;
 	u32 drop_rqn;
 
+	struct mlx5e_packet_merge_param pkt_merge_param;
+	struct rw_semaphore pkt_merge_param_sem;
+
 	struct mlx5e_rss *rss[MLX5E_MAX_NUM_RSS];
 	bool rss_active;
 	u32 rss_rqns[MLX5E_INDIR_RQT_SIZE];
@@ -392,6 +395,7 @@ static int mlx5e_rx_res_ptp_init(struct mlx5e_rx_res *res)
 	if (err)
 		goto out;
 
+	/* Separated from the channels RQs, does not share pkt_merge state with them */
 	mlx5e_tir_builder_build_rqt(builder, res->mdev->mlx5e_res.hw_objs.td.tdn,
 				    mlx5e_rqt_get_rqtn(&res->ptp.rqt),
 				    inner_ft_support);
@@ -447,6 +451,9 @@ int mlx5e_rx_res_init(struct mlx5e_rx_res *res, struct mlx5_core_dev *mdev,
 	res->max_nch = max_nch;
 	res->drop_rqn = drop_rqn;
 
+	res->pkt_merge_param = *init_pkt_merge_param;
+	init_rwsem(&res->pkt_merge_param_sem);
+
 	err = mlx5e_rx_res_rss_init_def(res, init_pkt_merge_param, init_nch);
 	if (err)
 		goto err_out;
@@ -513,7 +520,7 @@ u32 mlx5e_rx_res_get_tirn_ptp(struct mlx5e_rx_res *res)
 	return mlx5e_tir_get_tirn(&res->ptp.tir);
 }
 
-u32 mlx5e_rx_res_get_rqtn_direct(struct mlx5e_rx_res *res, unsigned int ix)
+static u32 mlx5e_rx_res_get_rqtn_direct(struct mlx5e_rx_res *res, unsigned int ix)
 {
 	return mlx5e_rqt_get_rqtn(&res->channels[ix].direct_rqt);
 }
@@ -656,6 +663,9 @@ int mlx5e_rx_res_packet_merge_set_param(struct mlx5e_rx_res *res,
 	if (!builder)
 		return -ENOMEM;
 
+	down_write(&res->pkt_merge_param_sem);
+	res->pkt_merge_param = *pkt_merge_param;
+
 	mlx5e_tir_builder_build_packet_merge(builder, pkt_merge_param);
 
 	final_err = 0;
@@ -681,6 +691,7 @@ int mlx5e_rx_res_packet_merge_set_param(struct mlx5e_rx_res *res,
 		}
 	}
 
+	up_write(&res->pkt_merge_param_sem);
 	mlx5e_tir_builder_free(builder);
 	return final_err;
 }
@@ -689,3 +700,31 @@ struct mlx5e_rss_params_hash mlx5e_rx_res_get_current_hash(struct mlx5e_rx_res *
 {
 	return mlx5e_rss_get_hash(res->rss[0]);
 }
+
+int mlx5e_rx_res_tls_tir_create(struct mlx5e_rx_res *res, unsigned int rxq,
+				struct mlx5e_tir *tir)
+{
+	bool inner_ft_support = res->features & MLX5E_RX_RES_FEATURE_INNER_FT;
+	struct mlx5e_tir_builder *builder;
+	u32 rqtn;
+	int err;
+
+	builder = mlx5e_tir_builder_alloc(false);
+	if (!builder)
+		return -ENOMEM;
+
+	rqtn = mlx5e_rx_res_get_rqtn_direct(res, rxq);
+
+	mlx5e_tir_builder_build_rqt(builder, res->mdev->mlx5e_res.hw_objs.td.tdn, rqtn,
+				    inner_ft_support);
+	mlx5e_tir_builder_build_direct(builder);
+	mlx5e_tir_builder_build_tls(builder);
+	down_read(&res->pkt_merge_param_sem);
+	mlx5e_tir_builder_build_packet_merge(builder, &res->pkt_merge_param);
+	err = mlx5e_tir_init(tir, builder, res->mdev, false);
+	up_read(&res->pkt_merge_param_sem);
+
+	mlx5e_tir_builder_free(builder);
+
+	return err;
+}


@@ -37,9 +37,6 @@ u32 mlx5e_rx_res_get_tirn_rss(struct mlx5e_rx_res *res, enum mlx5_traffic_types
 u32 mlx5e_rx_res_get_tirn_rss_inner(struct mlx5e_rx_res *res, enum mlx5_traffic_types tt);
 u32 mlx5e_rx_res_get_tirn_ptp(struct mlx5e_rx_res *res);
 
-/* RQTN getters for modules that create their own TIRs */
-u32 mlx5e_rx_res_get_rqtn_direct(struct mlx5e_rx_res *res, unsigned int ix);
-
 /* Activate/deactivate API */
 void mlx5e_rx_res_channels_activate(struct mlx5e_rx_res *res, struct mlx5e_channels *chs);
 void mlx5e_rx_res_channels_deactivate(struct mlx5e_rx_res *res);
@@ -69,4 +66,7 @@ struct mlx5e_rss *mlx5e_rx_res_rss_get(struct mlx5e_rx_res *res, u32 rss_idx);
 /* Workaround for hairpin */
 struct mlx5e_rss_params_hash mlx5e_rx_res_get_current_hash(struct mlx5e_rx_res *res);
 
+/* Accel TIRs */
+int mlx5e_rx_res_tls_tir_create(struct mlx5e_rx_res *res, unsigned int rxq,
+				struct mlx5e_tir *tir);
+
 #endif /* __MLX5_EN_RX_RES_H__ */


@@ -191,7 +191,7 @@ static void mlx5e_ipsec_set_swp(struct sk_buff *skb,
 		eseg->swp_inner_l3_offset = skb_inner_network_offset(skb) / 2;
 		eseg->swp_inner_l4_offset =
 			(skb->csum_start + skb->head - skb->data) / 2;
-		if (skb->protocol == htons(ETH_P_IPV6))
+		if (inner_ip_hdr(skb)->version == 6)
 			eseg->swp_flags |= MLX5_ETH_WQE_SWP_INNER_L3_IPV6;
 		break;
 	default:


@@ -100,25 +100,6 @@ mlx5e_ktls_rx_resync_create_resp_list(void)
 	return resp_list;
 }
 
-static int mlx5e_ktls_create_tir(struct mlx5_core_dev *mdev, struct mlx5e_tir *tir, u32 rqtn)
-{
-	struct mlx5e_tir_builder *builder;
-	int err;
-
-	builder = mlx5e_tir_builder_alloc(false);
-	if (!builder)
-		return -ENOMEM;
-
-	mlx5e_tir_builder_build_rqt(builder, mdev->mlx5e_res.hw_objs.td.tdn, rqtn, false);
-	mlx5e_tir_builder_build_direct(builder);
-	mlx5e_tir_builder_build_tls(builder);
-	err = mlx5e_tir_init(tir, builder, mdev, false);
-	mlx5e_tir_builder_free(builder);
-
-	return err;
-}
-
 static void accel_rule_handle_work(struct work_struct *work)
 {
 	struct mlx5e_ktls_offload_context_rx *priv_rx;
@@ -609,7 +590,6 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk,
 	struct mlx5_core_dev *mdev;
 	struct mlx5e_priv *priv;
 	int rxq, err;
-	u32 rqtn;
 
 	tls_ctx = tls_get_ctx(sk);
 	priv = netdev_priv(netdev);
@@ -635,9 +615,7 @@ int mlx5e_ktls_add_rx(struct net_device *netdev, struct sock *sk,
 	priv_rx->sw_stats = &priv->tls->sw_stats;
 	mlx5e_set_ktls_rx_priv_ctx(tls_ctx, priv_rx);
 
-	rqtn = mlx5e_rx_res_get_rqtn_direct(priv->rx_res, rxq);
-
-	err = mlx5e_ktls_create_tir(mdev, &priv_rx->tir, rqtn);
+	err = mlx5e_rx_res_tls_tir_create(priv->rx_res, rxq, &priv_rx->tir);
 	if (err)
 		goto err_create_tir;


@@ -1080,6 +1080,10 @@ static mlx5e_stats_grp_t mlx5e_ul_rep_stats_grps[] = {
 	&MLX5E_STATS_GRP(pme),
 	&MLX5E_STATS_GRP(channels),
 	&MLX5E_STATS_GRP(per_port_buff_congest),
+#ifdef CONFIG_MLX5_EN_IPSEC
+	&MLX5E_STATS_GRP(ipsec_sw),
+	&MLX5E_STATS_GRP(ipsec_hw),
+#endif
 };
 
 static unsigned int mlx5e_ul_rep_stats_grps_num(struct mlx5e_priv *priv)


@@ -543,13 +543,13 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 				     u16 klm_entries, u16 index)
 {
 	struct mlx5e_shampo_hd *shampo = rq->mpwqe.shampo;
-	u16 entries, pi, i, header_offset, err, wqe_bbs, new_entries;
+	u16 entries, pi, header_offset, err, wqe_bbs, new_entries;
 	u32 lkey = rq->mdev->mlx5e_res.hw_objs.mkey;
 	struct page *page = shampo->last_page;
 	u64 addr = shampo->last_addr;
 	struct mlx5e_dma_info *dma_info;
 	struct mlx5e_umr_wqe *umr_wqe;
-	int headroom;
+	int headroom, i;
 
 	headroom = rq->buff.headroom;
 	new_entries = klm_entries - (shampo->pi & (MLX5_UMR_KLM_ALIGNMENT - 1));
@@ -601,9 +601,7 @@ static int mlx5e_build_shampo_hd_umr(struct mlx5e_rq *rq,
 
 err_unmap:
 	while (--i >= 0) {
-		if (--index < 0)
-			index = shampo->hd_per_wq - 1;
-		dma_info = &shampo->info[index];
+		dma_info = &shampo->info[--index];
 		if (!(i & (MLX5E_SHAMPO_WQ_HEADER_PER_PAGE - 1))) {
 			dma_info->addr = ALIGN_DOWN(dma_info->addr, PAGE_SIZE);
 			mlx5e_page_release(rq, dma_info, true);
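The SHAMPO hunk moves the loop counter `i` from `u16` to `int`: with an unsigned counter, `--i >= 0` is always true, so the `err_unmap` unwind loop can never terminate once `i` wraps. A minimal model of the fixed unwind loop (a sketch, not the driver code):

```c
/* Release the first i entries in reverse order, the way the
 * err_unmap label does. Safe only because i is signed: when it
 * reaches -1, the `--i >= 0` test finally fails. */
static int unwind_count(int i)
{
	int released = 0;

	while (--i >= 0)	/* with an unsigned i, this never exits */
		released++;
	return released;
}
```

Calling this with `i` declared `unsigned` would underflow to a huge value instead of stopping, which is the bug the type change fixes.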


@@ -130,7 +130,7 @@ static u32 esw_qos_calculate_min_rate_divider(struct mlx5_eswitch *esw,
 	/* If vports min rate divider is 0 but their group has bw_share configured, then
 	 * need to set bw_share for vports to minimal value.
 	 */
-	if (!group_level && !max_guarantee && group->bw_share)
+	if (!group_level && !max_guarantee && group && group->bw_share)
 		return 1;
 	return 0;
 }
@@ -423,7 +423,7 @@ static int esw_qos_vport_update_group(struct mlx5_eswitch *esw,
 		return err;
 
 	/* Recalculate bw share weights of old and new groups */
-	if (vport->qos.bw_share) {
+	if (vport->qos.bw_share || new_group->bw_share) {
 		esw_qos_normalize_vports_min_rate(esw, curr_group, extack);
 		esw_qos_normalize_vports_min_rate(esw, new_group, extack);
 	}


@@ -329,14 +329,25 @@ static bool
 esw_is_indir_table(struct mlx5_eswitch *esw, struct mlx5_flow_attr *attr)
 {
 	struct mlx5_esw_flow_attr *esw_attr = attr->esw_attr;
+	bool result = false;
 	int i;
 
-	for (i = esw_attr->split_count; i < esw_attr->out_count; i++)
+	/* Indirect table is supported only for flows with in_port uplink
+	 * and the destination is vport on the same eswitch as the uplink,
+	 * return false in case at least one of destinations doesn't meet
+	 * this criteria.
+	 */
+	for (i = esw_attr->split_count; i < esw_attr->out_count; i++) {
 		if (esw_attr->dests[i].rep &&
 		    mlx5_esw_indir_table_needed(esw, attr, esw_attr->dests[i].rep->vport,
-					    esw_attr->dests[i].mdev))
-			return true;
-	return false;
+						esw_attr->dests[i].mdev)) {
+			result = true;
+		} else {
+			result = false;
+			break;
+		}
+	}
+
+	return result;
 }
 
 static int
@@ -2512,6 +2523,7 @@ static int esw_set_master_egress_rule(struct mlx5_core_dev *master,
 	struct mlx5_eswitch *esw = master->priv.eswitch;
 	struct mlx5_flow_table_attr ft_attr = {
 		.max_fte = 1, .prio = 0, .level = 0,
+		.flags = MLX5_FLOW_TABLE_OTHER_VPORT,
 	};
 	struct mlx5_flow_namespace *egress_ns;
 	struct mlx5_flow_table *acl;


@@ -835,6 +835,9 @@ void mlx5_start_health_poll(struct mlx5_core_dev *dev)
 
 	health->timer.expires = jiffies + msecs_to_jiffies(poll_interval_ms);
 	add_timer(&health->timer);
+
+	if (mlx5_core_is_pf(dev) && MLX5_CAP_MCAM_REG(dev, mrtc))
+		queue_delayed_work(health->wq, &health->update_fw_log_ts_work, 0);
 }
 
 void mlx5_stop_health_poll(struct mlx5_core_dev *dev, bool disable_health)
@@ -902,8 +905,6 @@ int mlx5_health_init(struct mlx5_core_dev *dev)
 	INIT_WORK(&health->fatal_report_work, mlx5_fw_fatal_reporter_err_work);
 	INIT_WORK(&health->report_work, mlx5_fw_reporter_err_work);
 	INIT_DELAYED_WORK(&health->update_fw_log_ts_work, mlx5_health_log_ts_update);
-	if (mlx5_core_is_pf(dev))
-		queue_delayed_work(health->wq, &health->update_fw_log_ts_work, 0);
 
 	return 0;


@@ -608,4 +608,5 @@ void mlx5_lag_port_sel_destroy(struct mlx5_lag *ldev)
 	if (port_sel->tunnel)
 		mlx5_destroy_ttc_table(port_sel->inner.ttc);
 	mlx5_lag_destroy_definers(ldev);
+	memset(port_sel, 0, sizeof(*port_sel));
 }


@@ -31,11 +31,11 @@ static void tout_set(struct mlx5_core_dev *dev, u64 val, enum mlx5_timeouts_type
 	dev->timeouts->to[type] = val;
 }
 
-static void tout_set_def_val(struct mlx5_core_dev *dev)
+void mlx5_tout_set_def_val(struct mlx5_core_dev *dev)
 {
 	int i;
 
-	for (i = MLX5_TO_FW_PRE_INIT_TIMEOUT_MS; i < MAX_TIMEOUT_TYPES; i++)
+	for (i = 0; i < MAX_TIMEOUT_TYPES; i++)
 		tout_set(dev, tout_def_sw_val[i], i);
 }
 
@@ -45,7 +45,6 @@ int mlx5_tout_init(struct mlx5_core_dev *dev)
 	if (!dev->timeouts)
 		return -ENOMEM;
 
-	tout_set_def_val(dev);
 	return 0;
 }


@@ -34,6 +34,7 @@ int mlx5_tout_init(struct mlx5_core_dev *dev);
 void mlx5_tout_cleanup(struct mlx5_core_dev *dev);
 void mlx5_tout_query_iseg(struct mlx5_core_dev *dev);
 int mlx5_tout_query_dtor(struct mlx5_core_dev *dev);
+void mlx5_tout_set_def_val(struct mlx5_core_dev *dev);
 u64 _mlx5_tout_ms(struct mlx5_core_dev *dev, enum mlx5_timeouts_types type);
 
 #define mlx5_tout_ms(dev, type) _mlx5_tout_ms(dev, MLX5_TO_##type##_MS)


@@ -992,11 +992,7 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
 	if (mlx5_core_is_pf(dev))
 		pcie_print_link_status(dev->pdev);
 
-	err = mlx5_tout_init(dev);
-	if (err) {
-		mlx5_core_err(dev, "Failed initializing timeouts, aborting\n");
-		return err;
-	}
+	mlx5_tout_set_def_val(dev);
 
 	/* wait for firmware to accept initialization segments configurations
 	 */
@@ -1005,13 +1001,13 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
 	if (err) {
 		mlx5_core_err(dev, "Firmware over %llu MS in pre-initializing state, aborting\n",
 			      mlx5_tout_ms(dev, FW_PRE_INIT_TIMEOUT));
-		goto err_tout_cleanup;
+		return err;
 	}
 
 	err = mlx5_cmd_init(dev);
 	if (err) {
 		mlx5_core_err(dev, "Failed initializing command interface, aborting\n");
-		goto err_tout_cleanup;
+		return err;
 	}
 
 	mlx5_tout_query_iseg(dev);
@@ -1075,18 +1071,16 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
 
 	mlx5_set_driver_version(dev);
 
-	mlx5_start_health_poll(dev);
-
 	err = mlx5_query_hca_caps(dev);
 	if (err) {
 		mlx5_core_err(dev, "query hca failed\n");
-		goto stop_health;
+		goto reclaim_boot_pages;
 	}
 
+	mlx5_start_health_poll(dev);
+
 	return 0;
 
-stop_health:
-	mlx5_stop_health_poll(dev, boot);
 reclaim_boot_pages:
 	mlx5_reclaim_startup_pages(dev);
 err_disable_hca:
@@ -1094,8 +1088,6 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
 err_cmd_cleanup:
 	mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
 	mlx5_cmd_cleanup(dev);
-err_tout_cleanup:
-	mlx5_tout_cleanup(dev);
 
 	return err;
 }
@@ -1114,7 +1106,6 @@ static int mlx5_function_teardown(struct mlx5_core_dev *dev, bool boot)
 	mlx5_core_disable_hca(dev, 0);
 	mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
 	mlx5_cmd_cleanup(dev);
-	mlx5_tout_cleanup(dev);
 
 	return 0;
 }
@@ -1476,6 +1467,12 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
 					    mlx5_debugfs_root);
 	INIT_LIST_HEAD(&priv->traps);
 
+	err = mlx5_tout_init(dev);
+	if (err) {
+		mlx5_core_err(dev, "Failed initializing timeouts, aborting\n");
+		goto err_timeout_init;
+	}
+
 	err = mlx5_health_init(dev);
 	if (err)
 		goto err_health_init;
@@ -1501,6 +1498,8 @@ int mlx5_mdev_init(struct mlx5_core_dev *dev, int profile_idx)
 err_pagealloc_init:
 	mlx5_health_cleanup(dev);
 err_health_init:
+	mlx5_tout_cleanup(dev);
+err_timeout_init:
 	debugfs_remove(dev->priv.dbg_root);
 	mutex_destroy(&priv->pgdir_mutex);
 	mutex_destroy(&priv->alloc_mutex);
@@ -1518,6 +1517,7 @@ void mlx5_mdev_uninit(struct mlx5_core_dev *dev)
 	mlx5_adev_cleanup(dev);
 	mlx5_pagealloc_cleanup(dev);
 	mlx5_health_cleanup(dev);
+	mlx5_tout_cleanup(dev);
 	debugfs_remove_recursive(dev->priv.dbg_root);
 	mutex_destroy(&priv->pgdir_mutex);
 	mutex_destroy(&priv->alloc_mutex);


@@ -1563,8 +1563,10 @@ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
 	}
 
 	err = ocelot_setup_ptp_traps(ocelot, port, l2, l4);
-	if (err)
+	if (err) {
+		mutex_unlock(&ocelot->ptp_lock);
 		return err;
+	}
 
 	if (l2 && l4)
 		cfg.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
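The ocelot fix above releases `ptp_lock` on the error path, which the original early `return` skipped. A common idiom that prevents this whole class of bug is funneling every exit through a single unlock label; a userspace sketch with a POSIX mutex (illustrative names, not the ocelot code):

```c
#include <pthread.h>

/* Hypothetical configuration step that must hold the lock throughout
 * and release it on both the success and failure paths. */
static int do_config(pthread_mutex_t *lock, int should_fail)
{
	int err = 0;

	pthread_mutex_lock(lock);

	if (should_fail) {
		err = -1;
		goto out;	/* errors funnel through the unlock too */
	}

	/* ... apply configuration while holding the lock ... */

out:
	pthread_mutex_unlock(lock);	/* exactly one unlock, every path */
	return err;
}
```

With the single exit label there is no path that returns while still holding the mutex, so a later caller cannot deadlock on it.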


@@ -120,7 +120,7 @@ static const struct net_device_ops xtsonic_netdev_ops = {
 	.ndo_set_mac_address	= eth_mac_addr,
 };
 
-static int __init sonic_probe1(struct net_device *dev)
+static int sonic_probe1(struct net_device *dev)
 {
 	unsigned int silicon_revision;
 	struct sonic_local *lp = netdev_priv(dev);


@@ -1077,8 +1077,14 @@ static int qlcnic_83xx_add_rings(struct qlcnic_adapter *adapter)
 	sds_mbx_size = sizeof(struct qlcnic_sds_mbx);
 	context_id = recv_ctx->context_id;
 	num_sds = adapter->drv_sds_rings - QLCNIC_MAX_SDS_RINGS;
-	ahw->hw_ops->alloc_mbx_args(&cmd, adapter,
-				    QLCNIC_CMD_ADD_RCV_RINGS);
+	err = ahw->hw_ops->alloc_mbx_args(&cmd, adapter,
+					  QLCNIC_CMD_ADD_RCV_RINGS);
+	if (err) {
+		dev_err(&adapter->pdev->dev,
+			"Failed to alloc mbx args %d\n", err);
+		return err;
+	}
+
 	cmd.req.arg[1] = 0 | (num_sds << 8) | (context_id << 16);
 
 	/* set up status rings, mbx 2-81 */


@@ -5540,8 +5540,6 @@ static int stmmac_set_features(struct net_device *netdev,
 				netdev_features_t features)
 {
 	struct stmmac_priv *priv = netdev_priv(netdev);
-	bool sph_en;
-	u32 chan;
 	/* Keep the COE Type in case of csum is supporting */
 	if (features & NETIF_F_RXCSUM)
@@ -5553,10 +5551,13 @@ static int stmmac_set_features(struct net_device *netdev,
 	 */
 	stmmac_rx_ipc(priv, priv->hw);
-	sph_en = (priv->hw->rx_csum > 0) && priv->sph;
+	if (priv->sph_cap) {
+		bool sph_en = (priv->hw->rx_csum > 0) && priv->sph;
+		u32 chan;
-	for (chan = 0; chan < priv->plat->rx_queues_to_use; chan++)
-		stmmac_enable_sph(priv, priv->ioaddr, sph_en, chan);
+		for (chan = 0; chan < priv->plat->rx_queues_to_use; chan++)
+			stmmac_enable_sph(priv, priv->ioaddr, sph_en, chan);
+	}
 	return 0;
 }


@@ -2228,7 +2228,7 @@ static int lan78xx_phy_init(struct lan78xx_net *dev)
 	if (dev->domain_data.phyirq > 0)
 		phydev->irq = dev->domain_data.phyirq;
 	else
-		phydev->irq = 0;
+		phydev->irq = PHY_POLL;
 	netdev_dbg(dev->net, "phydev->irq = %d\n", phydev->irq);
 	/* set to AUTOMDIX */


@@ -497,6 +497,7 @@ static netdev_tx_t vrf_process_v6_outbound(struct sk_buff *skb,
 	/* strip the ethernet header added for pass through VRF device */
 	__skb_pull(skb, skb_network_offset(skb));
+	memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
 	ret = vrf_ip6_local_out(net, skb->sk, skb);
 	if (unlikely(net_xmit_eval(ret)))
 		dev->stats.tx_errors++;
@@ -579,6 +580,7 @@ static netdev_tx_t vrf_process_v4_outbound(struct sk_buff *skb,
 					       RT_SCOPE_LINK);
 	}
+	memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
 	ret = vrf_ip_local_out(dev_net(skb_dst(skb)->dev), skb->sk, skb);
 	if (unlikely(net_xmit_eval(ret)))
 		vrf_dev->stats.tx_errors++;


@@ -163,7 +163,7 @@ static bool node_placement(struct allowedips_node __rcu *trie, const u8 *key,
 	return exact;
 }
-static inline void connect_node(struct allowedips_node **parent, u8 bit, struct allowedips_node *node)
+static inline void connect_node(struct allowedips_node __rcu **parent, u8 bit, struct allowedips_node *node)
 {
 	node->parent_bit_packed = (unsigned long)parent | bit;
 	rcu_assign_pointer(*parent, node);


@@ -98,6 +98,7 @@ static int wg_stop(struct net_device *dev)
 {
 	struct wg_device *wg = netdev_priv(dev);
 	struct wg_peer *peer;
+	struct sk_buff *skb;
 	mutex_lock(&wg->device_update_lock);
 	list_for_each_entry(peer, &wg->peer_list, peer_list) {
@@ -108,7 +109,9 @@ static int wg_stop(struct net_device *dev)
 		wg_noise_reset_last_sent_handshake(&peer->last_sent_handshake);
 	}
 	mutex_unlock(&wg->device_update_lock);
-	skb_queue_purge(&wg->incoming_handshakes);
+	while ((skb = ptr_ring_consume(&wg->handshake_queue.ring)) != NULL)
+		kfree_skb(skb);
+	atomic_set(&wg->handshake_queue_len, 0);
 	wg_socket_reinit(wg, NULL, NULL);
 	return 0;
 }
@@ -235,14 +238,13 @@ static void wg_destruct(struct net_device *dev)
 	destroy_workqueue(wg->handshake_receive_wq);
 	destroy_workqueue(wg->handshake_send_wq);
 	destroy_workqueue(wg->packet_crypt_wq);
-	wg_packet_queue_free(&wg->decrypt_queue);
-	wg_packet_queue_free(&wg->encrypt_queue);
+	wg_packet_queue_free(&wg->handshake_queue, true);
+	wg_packet_queue_free(&wg->decrypt_queue, false);
+	wg_packet_queue_free(&wg->encrypt_queue, false);
 	rcu_barrier(); /* Wait for all the peers to be actually freed. */
 	wg_ratelimiter_uninit();
 	memzero_explicit(&wg->static_identity, sizeof(wg->static_identity));
-	skb_queue_purge(&wg->incoming_handshakes);
 	free_percpu(dev->tstats);
-	free_percpu(wg->incoming_handshakes_worker);
 	kvfree(wg->index_hashtable);
 	kvfree(wg->peer_hashtable);
 	mutex_unlock(&wg->device_update_lock);
@@ -298,7 +300,6 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
 	init_rwsem(&wg->static_identity.lock);
 	mutex_init(&wg->socket_update_lock);
 	mutex_init(&wg->device_update_lock);
-	skb_queue_head_init(&wg->incoming_handshakes);
 	wg_allowedips_init(&wg->peer_allowedips);
 	wg_cookie_checker_init(&wg->cookie_checker, wg);
 	INIT_LIST_HEAD(&wg->peer_list);
@@ -316,16 +317,10 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
 	if (!dev->tstats)
 		goto err_free_index_hashtable;
-	wg->incoming_handshakes_worker =
-		wg_packet_percpu_multicore_worker_alloc(
-				wg_packet_handshake_receive_worker, wg);
-	if (!wg->incoming_handshakes_worker)
-		goto err_free_tstats;
 	wg->handshake_receive_wq = alloc_workqueue("wg-kex-%s",
 			WQ_CPU_INTENSIVE | WQ_FREEZABLE, 0, dev->name);
 	if (!wg->handshake_receive_wq)
-		goto err_free_incoming_handshakes;
+		goto err_free_tstats;
 	wg->handshake_send_wq = alloc_workqueue("wg-kex-%s",
 			WQ_UNBOUND | WQ_FREEZABLE, 0, dev->name);
@@ -347,10 +342,15 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
 	if (ret < 0)
 		goto err_free_encrypt_queue;
-	ret = wg_ratelimiter_init();
+	ret = wg_packet_queue_init(&wg->handshake_queue, wg_packet_handshake_receive_worker,
+				   MAX_QUEUED_INCOMING_HANDSHAKES);
 	if (ret < 0)
 		goto err_free_decrypt_queue;
+	ret = wg_ratelimiter_init();
+	if (ret < 0)
+		goto err_free_handshake_queue;
 	ret = register_netdevice(dev);
 	if (ret < 0)
 		goto err_uninit_ratelimiter;
@@ -367,18 +367,18 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
 err_uninit_ratelimiter:
 	wg_ratelimiter_uninit();
+err_free_handshake_queue:
+	wg_packet_queue_free(&wg->handshake_queue, false);
 err_free_decrypt_queue:
-	wg_packet_queue_free(&wg->decrypt_queue);
+	wg_packet_queue_free(&wg->decrypt_queue, false);
 err_free_encrypt_queue:
-	wg_packet_queue_free(&wg->encrypt_queue);
+	wg_packet_queue_free(&wg->encrypt_queue, false);
 err_destroy_packet_crypt:
 	destroy_workqueue(wg->packet_crypt_wq);
 err_destroy_handshake_send:
 	destroy_workqueue(wg->handshake_send_wq);
 err_destroy_handshake_receive:
 	destroy_workqueue(wg->handshake_receive_wq);
-err_free_incoming_handshakes:
-	free_percpu(wg->incoming_handshakes_worker);
 err_free_tstats:
 	free_percpu(dev->tstats);
 err_free_index_hashtable:
@@ -398,6 +398,7 @@ static struct rtnl_link_ops link_ops __read_mostly = {
 static void wg_netns_pre_exit(struct net *net)
 {
 	struct wg_device *wg;
+	struct wg_peer *peer;
 	rtnl_lock();
 	list_for_each_entry(wg, &device_list, device_list) {
@@ -407,6 +408,8 @@ static void wg_netns_pre_exit(struct net *net)
 		mutex_lock(&wg->device_update_lock);
 		rcu_assign_pointer(wg->creating_net, NULL);
 		wg_socket_reinit(wg, NULL, NULL);
+		list_for_each_entry(peer, &wg->peer_list, peer_list)
+			wg_socket_clear_peer_endpoint_src(peer);
 		mutex_unlock(&wg->device_update_lock);
 	}
 }


@@ -39,21 +39,18 @@ struct prev_queue {
 struct wg_device {
 	struct net_device *dev;
-	struct crypt_queue encrypt_queue, decrypt_queue;
+	struct crypt_queue encrypt_queue, decrypt_queue, handshake_queue;
 	struct sock __rcu *sock4, *sock6;
 	struct net __rcu *creating_net;
 	struct noise_static_identity static_identity;
-	struct workqueue_struct *handshake_receive_wq, *handshake_send_wq;
-	struct workqueue_struct *packet_crypt_wq;
-	struct sk_buff_head incoming_handshakes;
-	int incoming_handshake_cpu;
-	struct multicore_worker __percpu *incoming_handshakes_worker;
+	struct workqueue_struct *packet_crypt_wq, *handshake_receive_wq, *handshake_send_wq;
 	struct cookie_checker cookie_checker;
 	struct pubkey_hashtable *peer_hashtable;
 	struct index_hashtable *index_hashtable;
 	struct allowedips peer_allowedips;
 	struct mutex device_update_lock, socket_update_lock;
 	struct list_head device_list, peer_list;
+	atomic_t handshake_queue_len;
 	unsigned int num_peers, device_update_gen;
 	u32 fwmark;
 	u16 incoming_port;


@@ -17,7 +17,7 @@
 #include <linux/genetlink.h>
 #include <net/rtnetlink.h>
-static int __init mod_init(void)
+static int __init wg_mod_init(void)
 {
 	int ret;
@@ -60,7 +60,7 @@ static int __init mod_init(void)
 	return ret;
 }
-static void __exit mod_exit(void)
+static void __exit wg_mod_exit(void)
 {
 	wg_genetlink_uninit();
 	wg_device_uninit();
@@ -68,8 +68,8 @@ static void __exit mod_exit(void)
 	wg_allowedips_slab_uninit();
 }
-module_init(mod_init);
-module_exit(mod_exit);
+module_init(wg_mod_init);
+module_exit(wg_mod_exit);
 MODULE_LICENSE("GPL v2");
 MODULE_DESCRIPTION("WireGuard secure network tunnel");
 MODULE_AUTHOR("Jason A. Donenfeld <Jason@zx2c4.com>");


@@ -38,11 +38,11 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
 	return 0;
 }
-void wg_packet_queue_free(struct crypt_queue *queue)
+void wg_packet_queue_free(struct crypt_queue *queue, bool purge)
 {
 	free_percpu(queue->worker);
-	WARN_ON(!__ptr_ring_empty(&queue->ring));
-	ptr_ring_cleanup(&queue->ring, NULL);
+	WARN_ON(!purge && !__ptr_ring_empty(&queue->ring));
+	ptr_ring_cleanup(&queue->ring, purge ? (void(*)(void*))kfree_skb : NULL);
 }
 #define NEXT(skb) ((skb)->prev)
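wg_packet_queue_free() now takes a purge flag: a ring that may legitimately still hold packets is drained with a destructor instead of tripping the WARN_ON. The same optional-destructor idea in a freestanding sketch (hypothetical ring type, not the kernel ptr_ring):

```c
#include <assert.h>
#include <stdlib.h>

struct ring { void **items; int len; };

static int freed;                       /* counts destructor calls */
static void destroy(void *p) { free(p); freed++; }

/* Free the ring; with purge set, run the destructor on leftovers
 * instead of asserting the ring is already empty. */
static void ring_free(struct ring *r, int purge)
{
	assert(purge || r->len == 0);   /* analogue of the WARN_ON */
	for (int i = 0; purge && i < r->len; i++)
		destroy(r->items[i]);
	free(r->items);
}
```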


@@ -23,7 +23,7 @@ struct sk_buff;
 /* queueing.c APIs: */
 int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
 			 unsigned int len);
-void wg_packet_queue_free(struct crypt_queue *queue);
+void wg_packet_queue_free(struct crypt_queue *queue, bool purge);
 struct multicore_worker __percpu *
 wg_packet_percpu_multicore_worker_alloc(work_func_t function, void *ptr);


@@ -176,12 +176,12 @@ int wg_ratelimiter_init(void)
 			(1U << 14) / sizeof(struct hlist_head)));
 	max_entries = table_size * 8;
-	table_v4 = kvzalloc(table_size * sizeof(*table_v4), GFP_KERNEL);
+	table_v4 = kvcalloc(table_size, sizeof(*table_v4), GFP_KERNEL);
 	if (unlikely(!table_v4))
 		goto err_kmemcache;
 #if IS_ENABLED(CONFIG_IPV6)
-	table_v6 = kvzalloc(table_size * sizeof(*table_v6), GFP_KERNEL);
+	table_v6 = kvcalloc(table_size, sizeof(*table_v6), GFP_KERNEL);
 	if (unlikely(!table_v6)) {
 		kvfree(table_v4);
 		goto err_kmemcache;
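Switching kvzalloc(n * size) to kvcalloc(n, size) moves the multiplication into the allocator, which rejects a product that would overflow instead of silently allocating a wrapped (tiny or zero) size. Userspace calloc() gives the same guarantee; a sketch:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* kvcalloc-style allocation: the allocator checks n * size for
 * overflow, rather than the caller passing a wrapped product. */
static void *alloc_table(size_t n, size_t size)
{
	return calloc(n, size);
}
```

With n = SIZE_MAX/4 + 1 and size 8, the true product is 2 * (SIZE_MAX + 1), which wraps to 0 in size_t arithmetic; calloc detects this and fails, whereas malloc(n * 8) would have been malloc(0).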


@@ -116,8 +116,8 @@ static void wg_receive_handshake_packet(struct wg_device *wg,
 		return;
 	}
-	under_load = skb_queue_len(&wg->incoming_handshakes) >=
-		     MAX_QUEUED_INCOMING_HANDSHAKES / 8;
+	under_load = atomic_read(&wg->handshake_queue_len) >=
+		     MAX_QUEUED_INCOMING_HANDSHAKES / 8;
 	if (under_load) {
 		last_under_load = ktime_get_coarse_boottime_ns();
 	} else if (last_under_load) {
@@ -212,13 +212,14 @@ static void wg_receive_handshake_packet(struct wg_device *wg,
 void wg_packet_handshake_receive_worker(struct work_struct *work)
 {
-	struct wg_device *wg = container_of(work, struct multicore_worker,
-					    work)->ptr;
+	struct crypt_queue *queue = container_of(work, struct multicore_worker, work)->ptr;
+	struct wg_device *wg = container_of(queue, struct wg_device, handshake_queue);
 	struct sk_buff *skb;
-	while ((skb = skb_dequeue(&wg->incoming_handshakes)) != NULL) {
+	while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) {
 		wg_receive_handshake_packet(wg, skb);
 		dev_kfree_skb(skb);
+		atomic_dec(&wg->handshake_queue_len);
 		cond_resched();
 	}
 }
@@ -553,22 +554,28 @@ void wg_packet_receive(struct wg_device *wg, struct sk_buff *skb)
 	case cpu_to_le32(MESSAGE_HANDSHAKE_INITIATION):
 	case cpu_to_le32(MESSAGE_HANDSHAKE_RESPONSE):
 	case cpu_to_le32(MESSAGE_HANDSHAKE_COOKIE): {
-		int cpu;
+		int cpu, ret = -EBUSY;
+		if (unlikely(!rng_is_initialized()))
+			goto drop;
-		if (skb_queue_len(&wg->incoming_handshakes) >
-			    MAX_QUEUED_INCOMING_HANDSHAKES ||
-		    unlikely(!rng_is_initialized())) {
+		if (atomic_read(&wg->handshake_queue_len) > MAX_QUEUED_INCOMING_HANDSHAKES / 2) {
+			if (spin_trylock_bh(&wg->handshake_queue.ring.producer_lock)) {
+				ret = __ptr_ring_produce(&wg->handshake_queue.ring, skb);
+				spin_unlock_bh(&wg->handshake_queue.ring.producer_lock);
+			}
+		} else
+			ret = ptr_ring_produce_bh(&wg->handshake_queue.ring, skb);
+		if (ret) {
+	drop:
 			net_dbg_skb_ratelimited("%s: Dropping handshake packet from %pISpfsc\n",
 						wg->dev->name, skb);
 			goto err;
 		}
-		skb_queue_tail(&wg->incoming_handshakes, skb);
-		/* Queues up a call to packet_process_queued_handshake_
-		 * packets(skb):
-		 */
-		cpu = wg_cpumask_next_online(&wg->incoming_handshake_cpu);
+		atomic_inc(&wg->handshake_queue_len);
+		cpu = wg_cpumask_next_online(&wg->handshake_queue.last_cpu);
+		/* Queues up a call to packet_process_queued_handshake_packets(skb): */
 		queue_work_on(cpu, wg->handshake_receive_wq,
-			      &per_cpu_ptr(wg->incoming_handshakes_worker, cpu)->work);
+			      &per_cpu_ptr(wg->handshake_queue.worker, cpu)->work);
 		break;
 	}
 	case cpu_to_le32(MESSAGE_DATA):


@@ -308,7 +308,7 @@ void wg_socket_clear_peer_endpoint_src(struct wg_peer *peer)
 {
 	write_lock_bh(&peer->endpoint_lock);
 	memset(&peer->endpoint.src6, 0, sizeof(peer->endpoint.src6));
-	dst_cache_reset(&peer->endpoint_cache);
+	dst_cache_reset_now(&peer->endpoint_cache);
 	write_unlock_bh(&peer->endpoint_lock);
 }


@@ -86,6 +86,7 @@ static void *iwl_uefi_reduce_power_section(struct iwl_trans *trans,
 		if (len < tlv_len) {
 			IWL_ERR(trans, "invalid TLV len: %zd/%u\n",
 				len, tlv_len);
+			kfree(reduce_power_data);
 			reduce_power_data = ERR_PTR(-EINVAL);
 			goto out;
 		}
@@ -105,6 +106,7 @@ static void *iwl_uefi_reduce_power_section(struct iwl_trans *trans,
 				IWL_DEBUG_FW(trans,
 					     "Couldn't allocate (more) reduce_power_data\n");
+				kfree(reduce_power_data);
 				reduce_power_data = ERR_PTR(-ENOMEM);
 				goto out;
 			}
@@ -134,6 +136,10 @@ static void *iwl_uefi_reduce_power_section(struct iwl_trans *trans,
 done:
 	if (!size) {
 		IWL_DEBUG_FW(trans, "Empty REDUCE_POWER, skipping.\n");
+		/* Better safe than sorry, but 'reduce_power_data' should
+		 * always be NULL if !size.
+		 */
+		kfree(reduce_power_data);
 		reduce_power_data = ERR_PTR(-ENOENT);
 		goto out;
 	}


@@ -1313,23 +1313,31 @@ _iwl_op_mode_start(struct iwl_drv *drv, struct iwlwifi_opmode_table *op)
 	const struct iwl_op_mode_ops *ops = op->ops;
 	struct dentry *dbgfs_dir = NULL;
 	struct iwl_op_mode *op_mode = NULL;
+	int retry, max_retry = !!iwlwifi_mod_params.fw_restart * IWL_MAX_INIT_RETRY;
+	for (retry = 0; retry <= max_retry; retry++) {
 #ifdef CONFIG_IWLWIFI_DEBUGFS
 		drv->dbgfs_op_mode = debugfs_create_dir(op->name,
 							drv->dbgfs_drv);
 		dbgfs_dir = drv->dbgfs_op_mode;
 #endif
-	op_mode = ops->start(drv->trans, drv->trans->cfg, &drv->fw, dbgfs_dir);
+		op_mode = ops->start(drv->trans, drv->trans->cfg,
+				     &drv->fw, dbgfs_dir);
+		if (op_mode)
+			return op_mode;
+		IWL_ERR(drv, "retry init count %d\n", retry);
 #ifdef CONFIG_IWLWIFI_DEBUGFS
-	if (!op_mode) {
 		debugfs_remove_recursive(drv->dbgfs_op_mode);
 		drv->dbgfs_op_mode = NULL;
-	}
 #endif
+	}
-	return op_mode;
+	return NULL;
 }
 static void _iwl_op_mode_stop(struct iwl_drv *drv)
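_iwl_op_mode_start() now wraps ops->start() in a bounded retry loop whose count is gated by the fw_restart module parameter (`!!fw_restart * IWL_MAX_INIT_RETRY` yields 0 retries when restart is disabled). The shape of that loop, reduced to a sketch with a hypothetical start() stub:

```c
#include <assert.h>

#define MAX_INIT_RETRY 2

static int attempts;
/* Hypothetical stub: fail the first `fail_times` calls, then succeed. */
static int start(int fail_times)
{
	attempts++;
	return attempts <= fail_times ? -1 : 0;
}

/* 0 extra tries when restart is disabled, MAX_INIT_RETRY otherwise. */
static int op_mode_start(int fw_restart, int fail_times)
{
	int retry, max_retry = !!fw_restart * MAX_INIT_RETRY;

	for (retry = 0; retry <= max_retry; retry++) {
		if (start(fail_times) == 0)
			return 0;
	}
	return -1;
}
```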
static void _iwl_op_mode_stop(struct iwl_drv *drv) static void _iwl_op_mode_stop(struct iwl_drv *drv)


@@ -89,4 +89,7 @@ void iwl_drv_stop(struct iwl_drv *drv);
 #define IWL_EXPORT_SYMBOL(sym)
 #endif
+/* max retry for init flow */
+#define IWL_MAX_INIT_RETRY 2
 #endif /* __iwl_drv_h__ */


@@ -16,6 +16,7 @@
 #include <net/ieee80211_radiotap.h>
 #include <net/tcp.h>
+#include "iwl-drv.h"
 #include "iwl-op-mode.h"
 #include "iwl-io.h"
 #include "mvm.h"
@@ -1117,9 +1118,30 @@ static int iwl_mvm_mac_start(struct ieee80211_hw *hw)
 {
 	struct iwl_mvm *mvm = IWL_MAC80211_GET_MVM(hw);
 	int ret;
+	int retry, max_retry = 0;
 	mutex_lock(&mvm->mutex);
-	ret = __iwl_mvm_mac_start(mvm);
+	/* we are starting the mac not in error flow, and restart is enabled */
+	if (!test_bit(IWL_MVM_STATUS_HW_RESTART_REQUESTED, &mvm->status) &&
+	    iwlwifi_mod_params.fw_restart) {
+		max_retry = IWL_MAX_INIT_RETRY;
+		/*
+		 * This will prevent mac80211 recovery flows to trigger during
+		 * init failures
+		 */
+		set_bit(IWL_MVM_STATUS_STARTING, &mvm->status);
+	}
+	for (retry = 0; retry <= max_retry; retry++) {
+		ret = __iwl_mvm_mac_start(mvm);
+		if (!ret)
+			break;
+		IWL_ERR(mvm, "mac start retry %d\n", retry);
+	}
+	clear_bit(IWL_MVM_STATUS_STARTING, &mvm->status);
 	mutex_unlock(&mvm->mutex);
 	return ret;


@@ -1123,6 +1123,8 @@ struct iwl_mvm {
  * @IWL_MVM_STATUS_FIRMWARE_RUNNING: firmware is running
  * @IWL_MVM_STATUS_NEED_FLUSH_P2P: need to flush P2P bcast STA
  * @IWL_MVM_STATUS_IN_D3: in D3 (or at least about to go into it)
+ * @IWL_MVM_STATUS_STARTING: starting mac,
+ *	used to disable restart flow while in STARTING state
  */
 enum iwl_mvm_status {
 	IWL_MVM_STATUS_HW_RFKILL,
@@ -1134,6 +1136,7 @@ enum iwl_mvm_status {
 	IWL_MVM_STATUS_FIRMWARE_RUNNING,
 	IWL_MVM_STATUS_NEED_FLUSH_P2P,
 	IWL_MVM_STATUS_IN_D3,
+	IWL_MVM_STATUS_STARTING,
 };
 /* Keep track of completed init configuration */


@@ -686,6 +686,7 @@ static int iwl_mvm_start_get_nvm(struct iwl_mvm *mvm)
 	int ret;
 	rtnl_lock();
+	wiphy_lock(mvm->hw->wiphy);
 	mutex_lock(&mvm->mutex);
 	ret = iwl_run_init_mvm_ucode(mvm);
@@ -701,6 +702,7 @@ static int iwl_mvm_start_get_nvm(struct iwl_mvm *mvm)
 		iwl_mvm_stop_device(mvm);
 	mutex_unlock(&mvm->mutex);
+	wiphy_unlock(mvm->hw->wiphy);
 	rtnl_unlock();
 	if (ret < 0)
@@ -1600,6 +1602,9 @@ void iwl_mvm_nic_restart(struct iwl_mvm *mvm, bool fw_error)
 	 */
 	if (!mvm->fw_restart && fw_error) {
 		iwl_fw_error_collect(&mvm->fwrt, false);
+	} else if (test_bit(IWL_MVM_STATUS_STARTING,
+			    &mvm->status)) {
+		IWL_ERR(mvm, "Starting mac, retry will be triggered anyway\n");
 	} else if (test_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status)) {
 		struct iwl_mvm_reprobe *reprobe;


@@ -1339,9 +1339,13 @@ iwl_pci_find_dev_info(u16 device, u16 subsystem_device,
 		      u16 mac_type, u8 mac_step,
 		      u16 rf_type, u8 cdb, u8 rf_id, u8 no_160, u8 cores)
 {
+	int num_devices = ARRAY_SIZE(iwl_dev_info_table);
 	int i;
-	for (i = ARRAY_SIZE(iwl_dev_info_table) - 1; i >= 0; i--) {
+	if (!num_devices)
+		return NULL;
+	for (i = num_devices - 1; i >= 0; i--) {
 		const struct iwl_dev_info *dev_info = &iwl_dev_info_table[i];
 		if (dev_info->device != (u16)IWL_CFG_ANY &&
@@ -1442,8 +1446,10 @@ static int iwl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 	 */
 	if (iwl_trans->trans_cfg->rf_id &&
 	    iwl_trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_9000 &&
-	    !CSR_HW_RFID_TYPE(iwl_trans->hw_rf_id) && get_crf_id(iwl_trans))
+	    !CSR_HW_RFID_TYPE(iwl_trans->hw_rf_id) && get_crf_id(iwl_trans)) {
+		ret = -EINVAL;
 		goto out_free_trans;
+	}
 	dev_info = iwl_pci_find_dev_info(pdev->device, pdev->subsystem_device,
 					 CSR_HW_REV_TYPE(iwl_trans->hw_rev),


@@ -143,8 +143,6 @@ int mt7615_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 	if (!wcid)
 		wcid = &dev->mt76.global_wcid;
-	pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
 	if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) && msta) {
 		struct mt7615_phy *phy = &dev->phy;
@@ -164,6 +162,7 @@ int mt7615_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 	if (id < 0)
 		return id;
+	pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
 	mt7615_mac_write_txwi(dev, txwi_ptr, tx_info->skb, wcid, sta,
 			      pid, key, false);


@@ -43,19 +43,11 @@ EXPORT_SYMBOL_GPL(mt7663_usb_sdio_reg_map);
 static void
 mt7663_usb_sdio_write_txwi(struct mt7615_dev *dev, struct mt76_wcid *wcid,
 			   enum mt76_txq_id qid, struct ieee80211_sta *sta,
+			   struct ieee80211_key_conf *key, int pid,
 			   struct sk_buff *skb)
 {
-	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
-	struct ieee80211_key_conf *key = info->control.hw_key;
-	__le32 *txwi;
-	int pid;
-	if (!wcid)
-		wcid = &dev->mt76.global_wcid;
-	pid = mt76_tx_status_skb_add(&dev->mt76, wcid, skb);
-	txwi = (__le32 *)(skb->data - MT_USB_TXD_SIZE);
+	__le32 *txwi = (__le32 *)(skb->data - MT_USB_TXD_SIZE);
 	memset(txwi, 0, MT_USB_TXD_SIZE);
 	mt7615_mac_write_txwi(dev, txwi, skb, wcid, sta, pid, key, false);
 	skb_push(skb, MT_USB_TXD_SIZE);
@@ -194,10 +186,14 @@ int mt7663_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 	struct mt7615_dev *dev = container_of(mdev, struct mt7615_dev, mt76);
 	struct sk_buff *skb = tx_info->skb;
 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+	struct ieee80211_key_conf *key = info->control.hw_key;
 	struct mt7615_sta *msta;
-	int pad;
+	int pad, err, pktid;
 	msta = wcid ? container_of(wcid, struct mt7615_sta, wcid) : NULL;
+	if (!wcid)
+		wcid = &dev->mt76.global_wcid;
 	if ((info->flags & IEEE80211_TX_CTL_RATE_CTRL_PROBE) &&
 	    msta && !msta->rate_probe) {
 		/* request to configure sampling rate */
@@ -207,7 +203,8 @@ int mt7663_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 		spin_unlock_bh(&dev->mt76.lock);
 	}
-	mt7663_usb_sdio_write_txwi(dev, wcid, qid, sta, skb);
+	pktid = mt76_tx_status_skb_add(&dev->mt76, wcid, skb);
+	mt7663_usb_sdio_write_txwi(dev, wcid, qid, sta, key, pktid, skb);
 	if (mt76_is_usb(mdev)) {
 		u32 len = skb->len;
@@ -217,7 +214,12 @@ int mt7663_usb_sdio_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 		pad = round_up(skb->len, 4) - skb->len;
 	}
-	return mt76_skb_adjust_pad(skb, pad);
+	err = mt76_skb_adjust_pad(skb, pad);
+	if (err)
+		/* Release pktid in case of error. */
+		idr_remove(&wcid->pktid, pktid);
+	return err;
 }
 EXPORT_SYMBOL_GPL(mt7663_usb_sdio_tx_prepare_skb);


@@ -72,6 +72,7 @@ int mt76x02u_tx_prepare_skb(struct mt76_dev *mdev, void *data,
 	bool ampdu = IEEE80211_SKB_CB(tx_info->skb)->flags & IEEE80211_TX_CTL_AMPDU;
 	enum mt76_qsel qsel;
 	u32 flags;
+	int err;
 	mt76_insert_hdr_pad(tx_info->skb);
@@ -106,7 +107,12 @@ int mt76x02u_tx_prepare_skb(struct mt76_dev *mdev, void *data,
 		ewma_pktlen_add(&msta->pktlen, tx_info->skb->len);
 	}
-	return mt76x02u_skb_dma_info(tx_info->skb, WLAN_PORT, flags);
+	err = mt76x02u_skb_dma_info(tx_info->skb, WLAN_PORT, flags);
+	if (err && wcid)
+		/* Release pktid in case of error. */
+		idr_remove(&wcid->pktid, pid);
+	return err;
 }
 EXPORT_SYMBOL_GPL(mt76x02u_tx_prepare_skb);


@@ -1151,8 +1151,14 @@ int mt7915_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 		}
 	}
-	pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
+	t = (struct mt76_txwi_cache *)(txwi + mdev->drv->txwi_size);
+	t->skb = tx_info->skb;
+	id = mt76_token_consume(mdev, &t);
+	if (id < 0)
+		return id;
+	pid = mt76_tx_status_skb_add(mdev, wcid, tx_info->skb);
 	mt7915_mac_write_txwi(dev, txwi_ptr, tx_info->skb, wcid, pid, key,
 			      false);
@@ -1178,13 +1184,6 @@ int mt7915_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 		txp->bss_idx = mvif->idx;
 	}
-	t = (struct mt76_txwi_cache *)(txwi + mdev->drv->txwi_size);
-	t->skb = tx_info->skb;
-	id = mt76_token_consume(mdev, &t);
-	if (id < 0)
-		return id;
 	txp->token = cpu_to_le16(id);
 	if (test_bit(MT_WCID_FLAG_4ADDR, &wcid->flags))
 		txp->rept_wds_wcid = cpu_to_le16(wcid->idx);


@@ -176,7 +176,7 @@ mt7915_get_phy_mode(struct ieee80211_vif *vif, struct ieee80211_sta *sta)
 		if (ht_cap->ht_supported)
 			mode |= PHY_MODE_GN;
-		if (he_cap->has_he)
+		if (he_cap && he_cap->has_he)
 			mode |= PHY_MODE_AX_24G;
 	} else if (band == NL80211_BAND_5GHZ) {
 		mode |= PHY_MODE_A;
@@ -187,7 +187,7 @@ mt7915_get_phy_mode(struct ieee80211_vif *vif, struct ieee80211_sta *sta)
 		if (vht_cap->vht_supported)
 			mode |= PHY_MODE_AC;
-		if (he_cap->has_he)
+		if (he_cap && he_cap->has_he)
 			mode |= PHY_MODE_AX_5G;
 	}


@@ -142,15 +142,11 @@ int mt7921s_mac_reset(struct mt7921_dev *dev)
 static void
 mt7921s_write_txwi(struct mt7921_dev *dev, struct mt76_wcid *wcid,
 		   enum mt76_txq_id qid, struct ieee80211_sta *sta,
+		   struct ieee80211_key_conf *key, int pid,
 		   struct sk_buff *skb)
 {
-	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
-	struct ieee80211_key_conf *key = info->control.hw_key;
-	__le32 *txwi;
-	int pid;
-	pid = mt76_tx_status_skb_add(&dev->mt76, wcid, skb);
-	txwi = (__le32 *)(skb->data - MT_SDIO_TXD_SIZE);
+	__le32 *txwi = (__le32 *)(skb->data - MT_SDIO_TXD_SIZE);
 	memset(txwi, 0, MT_SDIO_TXD_SIZE);
 	mt7921_mac_write_txwi(dev, txwi, skb, wcid, key, pid, false);
 	skb_push(skb, MT_SDIO_TXD_SIZE);
@@ -163,8 +159,9 @@ int mt7921s_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 {
 	struct mt7921_dev *dev = container_of(mdev, struct mt7921_dev, mt76);
 	struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx_info->skb);
+	struct ieee80211_key_conf *key = info->control.hw_key;
 	struct sk_buff *skb = tx_info->skb;
-	int pad;
+	int err, pad, pktid;
 	if (unlikely(tx_info->skb->len <= ETH_HLEN))
 		return -EINVAL;
@@ -181,12 +178,18 @@ int mt7921s_tx_prepare_skb(struct mt76_dev *mdev, void *txwi_ptr,
 		}
 	}
-	mt7921s_write_txwi(dev, wcid, qid, sta, skb);
+	pktid = mt76_tx_status_skb_add(&dev->mt76, wcid, skb);
+	mt7921s_write_txwi(dev, wcid, qid, sta, key, pktid, skb);
 	mt7921_skb_add_sdio_hdr(skb, MT7921_SDIO_DATA);
 	pad = round_up(skb->len, 4) - skb->len;
-	return mt76_skb_adjust_pad(skb, pad);
+	err = mt76_skb_adjust_pad(skb, pad);
+	if (err)
+		/* Release pktid in case of error. */
+		idr_remove(&wcid->pktid, pktid);
+	return err;
 }
 void mt7921s_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue_entry *e)
void mt7921s_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue_entry *e) void mt7921s_tx_complete_skb(struct mt76_dev *mdev, struct mt76_queue_entry *e)


@@ -173,7 +173,7 @@ mt76_tx_status_skb_get(struct mt76_dev *dev, struct mt76_wcid *wcid, int pktid,
 			if (!(cb->flags & MT_TX_CB_DMA_DONE))
 				continue;
 
-			if (!time_is_after_jiffies(cb->jiffies +
-						   MT_TX_STATUS_SKB_TIMEOUT))
+			if (time_is_after_jiffies(cb->jiffies +
+						  MT_TX_STATUS_SKB_TIMEOUT))
 				continue;
 		}


@@ -25,6 +25,9 @@ static bool rt2x00usb_check_usb_error(struct rt2x00_dev *rt2x00dev, int status)
 	if (status == -ENODEV || status == -ENOENT)
 		return true;
 
+	if (!test_bit(DEVICE_STATE_STARTED, &rt2x00dev->flags))
+		return false;
+
 	if (status == -EPROTO || status == -ETIMEDOUT)
 		rt2x00dev->num_proto_errs++;
 	else


@@ -91,7 +91,6 @@ static int rtw89_fw_hdr_parser(struct rtw89_dev *rtwdev, const u8 *fw, u32 len,
 	info->section_num = GET_FW_HDR_SEC_NUM(fw);
 	info->hdr_len = RTW89_FW_HDR_SIZE +
			info->section_num * RTW89_FW_SECTION_HDR_SIZE;
-	SET_FW_HDR_PART_SIZE(fw, FWDL_SECTION_PER_PKT_LEN);
 
 	bin = fw + info->hdr_len;
 
@@ -275,6 +274,7 @@ static int __rtw89_fw_download_hdr(struct rtw89_dev *rtwdev, const u8 *fw, u32 l
 	}
 
 	skb_put_data(skb, fw, len);
+	SET_FW_HDR_PART_SIZE(skb->data, FWDL_SECTION_PER_PKT_LEN);
 	rtw89_h2c_pkt_set_hdr_fwdl(rtwdev, skb, FWCMD_TYPE_H2C,
				   H2C_CAT_MAC, H2C_CL_MAC_FWDL,
				   H2C_FUNC_MAC_FWHDR_DL, len);

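The rtw89 change above stops patching the per-packet section size into the firmware blob itself and instead patches the skb copy made by `skb_put_data()`: the blob handed out by the firmware loader is shared, cached state (and `const` once the companion header change lands), so writing the header field in place corrupted it for every later download. A userspace sketch of the same copy-then-patch pattern; the offset and helper names here are illustrative, not the driver's:

```c
#include <stdint.h>
#include <string.h>

/* Illustrative stand-in for SET_FW_HDR_PART_SIZE(): write a little-endian
 * 16-bit field at an assumed header offset, always into a private copy. */
static void set_part_size(uint8_t *hdr_copy, uint16_t val)
{
	hdr_copy[28] = (uint8_t)(val & 0xff);
	hdr_copy[29] = (uint8_t)(val >> 8);
}

/* Mirrors the fixed flow: copy the const firmware header into the transfer
 * buffer (the skb in the driver), then patch only the copy. */
static uint16_t prepare_download(const uint8_t *fw, size_t len,
				 uint8_t *pkt, size_t pkt_len)
{
	if (len > pkt_len || len < 30)
		return 0;
	memcpy(pkt, fw, len);		/* skb_put_data() equivalent */
	set_part_size(pkt, 2048);	/* patch the copy, never fw */
	return (uint16_t)(pkt[28] | (pkt[29] << 8));
}
```

The original bug is exactly the call `set_part_size()` being made on `fw` before the copy: the source blob would then carry a stale part size into the next download.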

@@ -282,8 +282,10 @@ struct rtw89_h2creg_sch_tx_en {
 	le32_get_bits(*((__le32 *)(fwhdr) + 6), GENMASK(15, 8))
 #define GET_FW_HDR_CMD_VERSERION(fwhdr)	\
 	le32_get_bits(*((__le32 *)(fwhdr) + 7), GENMASK(31, 24))
-#define SET_FW_HDR_PART_SIZE(fwhdr, val)	\
-	le32p_replace_bits((__le32 *)(fwhdr) + 7, val, GENMASK(15, 0))
+static inline void SET_FW_HDR_PART_SIZE(void *fwhdr, u32 val)
+{
+	le32p_replace_bits((__le32 *)fwhdr + 7, val, GENMASK(15, 0));
+}
 
 #define SET_CTRL_INFO_MACID(table, val) \
 	le32p_replace_bits((__le32 *)(table) + 0, val, GENMASK(6, 0))


@@ -434,6 +434,9 @@ static const struct usb_device_id usb_quirk_list[] = {
 	{ USB_DEVICE(0x1532, 0x0116), .driver_info =
			USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
 
+	/* Lenovo Powered USB-C Travel Hub (4X90S92381, RTL8153 GigE) */
+	{ USB_DEVICE(0x17ef, 0x721e), .driver_info = USB_QUIRK_NO_LPM },
+
 	/* Lenovo ThinkCenter A630Z TI024Gen3 usb-audio */
 	{ USB_DEVICE(0x17ef, 0xa012), .driver_info =
			USB_QUIRK_DISCONNECT_SUSPEND },


@@ -9698,7 +9698,10 @@ struct mlx5_ifc_mcam_access_reg_bits {
 	u8         regs_84_to_68[0x11];
 	u8         tracer_registers[0x4];
 
-	u8         regs_63_to_32[0x20];
+	u8         regs_63_to_46[0x12];
+	u8         mrtc[0x1];
+	u8         regs_44_to_32[0xd];
+
 	u8         regs_31_to_0[0x20];
 };


@@ -4404,7 +4404,8 @@ static inline u32 netif_msg_init(int debug_value, int default_msg_enable_bits)
 static inline void __netif_tx_lock(struct netdev_queue *txq, int cpu)
 {
 	spin_lock(&txq->_xmit_lock);
-	txq->xmit_lock_owner = cpu;
+	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+	WRITE_ONCE(txq->xmit_lock_owner, cpu);
 }
 
 static inline bool __netif_tx_acquire(struct netdev_queue *txq)
@@ -4421,26 +4422,32 @@ static inline void __netif_tx_release(struct netdev_queue *txq)
 static inline void __netif_tx_lock_bh(struct netdev_queue *txq)
 {
 	spin_lock_bh(&txq->_xmit_lock);
-	txq->xmit_lock_owner = smp_processor_id();
+	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+	WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id());
 }
 
 static inline bool __netif_tx_trylock(struct netdev_queue *txq)
 {
 	bool ok = spin_trylock(&txq->_xmit_lock);
-	if (likely(ok))
-		txq->xmit_lock_owner = smp_processor_id();
+
+	if (likely(ok)) {
+		/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+		WRITE_ONCE(txq->xmit_lock_owner, smp_processor_id());
+	}
 	return ok;
 }
 
 static inline void __netif_tx_unlock(struct netdev_queue *txq)
 {
-	txq->xmit_lock_owner = -1;
+	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+	WRITE_ONCE(txq->xmit_lock_owner, -1);
 	spin_unlock(&txq->_xmit_lock);
 }
 
 static inline void __netif_tx_unlock_bh(struct netdev_queue *txq)
 {
-	txq->xmit_lock_owner = -1;
+	/* Pairs with READ_ONCE() in __dev_queue_xmit() */
+	WRITE_ONCE(txq->xmit_lock_owner, -1);
	spin_unlock_bh(&txq->_xmit_lock);
 }

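The netdevice.h change above (together with the later net/core/dev.c hunk) annotates every write of `xmit_lock_owner` made under `_xmit_lock` with WRITE_ONCE(), pairing it with a READ_ONCE() at the lockless recursion check in `__dev_queue_xmit()`: the field is intentionally read without the lock, so unannotated plain accesses are a data race the compiler may tear or fuse, and KCSAN flags. A minimal userspace analogue, with C11 relaxed atomics standing in for the kernel's ONCE macros and the spinlock elided:

```c
#include <stdatomic.h>

/* Userspace analogue of the txq owner field: written while holding the
 * (elided) xmit lock, read locklessly from other CPUs. Relaxed atomics
 * play the role of WRITE_ONCE()/READ_ONCE(): no ordering is added, only
 * a guarantee that each access is a single, untorn load or store. */
struct txq {
	atomic_int xmit_lock_owner;	/* -1 == unlocked */
};

static void txq_set_owner(struct txq *q, int cpu)
{
	atomic_store_explicit(&q->xmit_lock_owner, cpu, memory_order_relaxed);
}

static void txq_clear_owner(struct txq *q)
{
	atomic_store_explicit(&q->xmit_lock_owner, -1, memory_order_relaxed);
}

/* Lockless recursion check, as in __dev_queue_xmit(): other CPUs may
 * concurrently set the field to -1 or to their own id, but never to
 * ours, so observing our own id reliably means re-entry. */
static int txq_owned_by(struct txq *q, int cpu)
{
	return atomic_load_explicit(&q->xmit_lock_owner,
				    memory_order_relaxed) == cpu;
}
```

The correctness argument is the comment the patch adds in `__dev_queue_xmit()`: concurrent writers can only ever store `-1` or their own CPU id, so a false "owned by me" reading is impossible.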

@@ -27,9 +27,7 @@ static inline bool siphash_key_is_zero(const siphash_key_t *key)
 }
 
 u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key);
-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key);
-#endif
 
 u64 siphash_1u64(const u64 a, const siphash_key_t *key);
 u64 siphash_2u64(const u64 a, const u64 b, const siphash_key_t *key);
@@ -82,10 +80,9 @@ static inline u64 ___siphash_aligned(const __le64 *data, size_t len,
 static inline u64 siphash(const void *data, size_t len,
			   const siphash_key_t *key)
 {
-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
-	if (!IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
+	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
+	    !IS_ALIGNED((unsigned long)data, SIPHASH_ALIGNMENT))
 		return __siphash_unaligned(data, len, key);
-#endif
 	return ___siphash_aligned(data, len, key);
 }
 
@@ -96,10 +93,8 @@ typedef struct {
 
 u32 __hsiphash_aligned(const void *data, size_t len,
		       const hsiphash_key_t *key);
-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 u32 __hsiphash_unaligned(const void *data, size_t len,
			 const hsiphash_key_t *key);
-#endif
 
 u32 hsiphash_1u32(const u32 a, const hsiphash_key_t *key);
 u32 hsiphash_2u32(const u32 a, const u32 b, const hsiphash_key_t *key);
@@ -135,10 +130,9 @@ static inline u32 ___hsiphash_aligned(const __le32 *data, size_t len,
 static inline u32 hsiphash(const void *data, size_t len,
			    const hsiphash_key_t *key)
 {
-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
-	if (!IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
+	if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
+	    !IS_ALIGNED((unsigned long)data, HSIPHASH_ALIGNMENT))
 		return __hsiphash_unaligned(data, len, key);
-#endif
 	return ___hsiphash_aligned(data, len, key);
 }

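The siphash.h change replaces `#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS` preprocessor blocks with a single `IS_ENABLED()` test: both call paths are now always declared and compiled, and because `IS_ENABLED()` folds to a compile-time constant the optimizer still emits only one branch. A userspace sketch of that dispatch shape, where the config macro, alignment constant, and the two hash bodies are stand-ins for the kernel's:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-ins: in the kernel, IS_ENABLED() expands Kconfig symbols; here
 * we hardwire "this arch handles unaligned loads efficiently". */
#define HAVE_EFFICIENT_UNALIGNED_ACCESS 1
#define IS_ENABLED(cfg)		(cfg)
#define SIPHASH_ALIGNMENT	8
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* Dummy hash bodies so the dispatch is observable. */
static uint64_t hash_unaligned(const void *data, size_t len)
{
	(void)data;
	return (uint64_t)len * 2;	/* marker: byte-wise path */
}

static uint64_t hash_aligned(const void *data, size_t len)
{
	(void)data;
	return (uint64_t)len;		/* marker: word-wise path */
}

/* Mirrors the new siphash() wrapper: take the byte-wise path whenever
 * the arch allows it, or whenever the pointer is misaligned; otherwise
 * the aligned path. With the constant folded, dead code is still
 * type-checked but never emitted. */
static uint64_t siphash_like(const void *data, size_t len)
{
	if (IS_ENABLED(HAVE_EFFICIENT_UNALIGNED_ACCESS) ||
	    !IS_ALIGNED((uintptr_t)data, SIPHASH_ALIGNMENT))
		return hash_unaligned(data, len);
	return hash_aligned(data, len);
}
```

The practical win over `#ifndef` is that the disabled branch can no longer bit-rot unnoticed, since the compiler always sees it.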

@@ -133,7 +133,7 @@ static inline void sk_mark_napi_id(struct sock *sk, const struct sk_buff *skb)
 	if (unlikely(READ_ONCE(sk->sk_napi_id) != skb->napi_id))
 		WRITE_ONCE(sk->sk_napi_id, skb->napi_id);
 #endif
-	sk_rx_queue_set(sk, skb);
+	sk_rx_queue_update(sk, skb);
 }
 
 static inline void __sk_mark_napi_id_once(struct sock *sk, unsigned int napi_id)


@@ -79,6 +79,17 @@ static inline void dst_cache_reset(struct dst_cache *dst_cache)
 	dst_cache->reset_ts = jiffies;
 }
 
+/**
+ *	dst_cache_reset_now - invalidate the cache contents immediately
+ *	@dst_cache: the cache
+ *
+ *	The caller must be sure there are no concurrent users, as this frees
+ *	all dst_cache users immediately, rather than waiting for the next
+ *	per-cpu usage like dst_cache_reset does. Most callers should use the
+ *	higher speed lazily-freed dst_cache_reset function instead.
+ */
+void dst_cache_reset_now(struct dst_cache *dst_cache);
+
 /**
  *	dst_cache_init - initialize the cache, allocating the required storage
  *	@dst_cache: the cache


@@ -69,7 +69,7 @@ struct fib_rules_ops {
 	int			(*action)(struct fib_rule *,
					  struct flowi *, int,
					  struct fib_lookup_arg *);
-	bool			(*suppress)(struct fib_rule *,
+	bool			(*suppress)(struct fib_rule *, int,
					    struct fib_lookup_arg *);
 	int			(*match)(struct fib_rule *,
					 struct flowi *, int);
@@ -218,7 +218,9 @@ INDIRECT_CALLABLE_DECLARE(int fib4_rule_action(struct fib_rule *rule,
			    struct fib_lookup_arg *arg));
 
 INDIRECT_CALLABLE_DECLARE(bool fib6_rule_suppress(struct fib_rule *rule,
+						  int flags,
						  struct fib_lookup_arg *arg));
 INDIRECT_CALLABLE_DECLARE(bool fib4_rule_suppress(struct fib_rule *rule,
+						  int flags,
						  struct fib_lookup_arg *arg));
 #endif


@@ -438,7 +438,7 @@ int fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
 #ifdef CONFIG_IP_ROUTE_CLASSID
 static inline int fib_num_tclassid_users(struct net *net)
 {
-	return net->ipv4.fib_num_tclassid_users;
+	return atomic_read(&net->ipv4.fib_num_tclassid_users);
 }
 #else
 static inline int fib_num_tclassid_users(struct net *net)


@@ -65,7 +65,7 @@ struct netns_ipv4 {
 	bool			fib_has_custom_local_routes;
 	bool			fib_offload_disabled;
 #ifdef CONFIG_IP_ROUTE_CLASSID
-	int			fib_num_tclassid_users;
+	atomic_t		fib_num_tclassid_users;
 #endif
 	struct hlist_head	*fib_table_hash;
 	struct sock		*fibnl;

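The netns field above becomes `atomic_t` because, as the ipv4 hunks later in this pull show, the counter is incremented and decremented from fib-rule configure/delete and nexthop init/release paths that are not all serialized by one lock: a plain `int` `++`/`--` compiles to a load/modify/store sequence, so racing writers can lose updates. The C11 analogue of the `atomic_set`/`atomic_inc`/`atomic_dec`/`atomic_read` quartet the conversion uses:

```c
#include <stdatomic.h>

/* C11 stand-in for the kernel's atomic_t counter: fetch_add/fetch_sub
 * are single indivisible read-modify-write operations, so concurrent
 * increments can no longer overwrite each other. */
static atomic_int fib_num_tclassid_users;

static void tclassid_user_get(void)	/* atomic_inc() */
{
	atomic_fetch_add(&fib_num_tclassid_users, 1);
}

static void tclassid_user_put(void)	/* atomic_dec() */
{
	atomic_fetch_sub(&fib_num_tclassid_users, 1);
}

static int tclassid_users(void)		/* atomic_read() */
{
	return atomic_load(&fib_num_tclassid_users);
}
```

Note the readers in `fib_num_tclassid_users()` only need an untorn snapshot, which `atomic_read()`/`atomic_load()` provides without any extra ordering.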

@@ -1913,18 +1913,31 @@ static inline int sk_tx_queue_get(const struct sock *sk)
 	return -1;
 }
 
-static inline void sk_rx_queue_set(struct sock *sk, const struct sk_buff *skb)
+static inline void __sk_rx_queue_set(struct sock *sk,
+				     const struct sk_buff *skb,
+				     bool force_set)
 {
 #ifdef CONFIG_SOCK_RX_QUEUE_MAPPING
 	if (skb_rx_queue_recorded(skb)) {
 		u16 rx_queue = skb_get_rx_queue(skb);
 
-		if (unlikely(READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue))
+		if (force_set ||
+		    unlikely(READ_ONCE(sk->sk_rx_queue_mapping) != rx_queue))
 			WRITE_ONCE(sk->sk_rx_queue_mapping, rx_queue);
 	}
 #endif
 }
 
+static inline void sk_rx_queue_set(struct sock *sk, const struct sk_buff *skb)
+{
+	__sk_rx_queue_set(sk, skb, true);
+}
+
+static inline void sk_rx_queue_update(struct sock *sk, const struct sk_buff *skb)
+{
+	__sk_rx_queue_set(sk, skb, false);
+}
+
 static inline void sk_rx_queue_clear(struct sock *sk)
 {
 #ifdef CONFIG_SOCK_RX_QUEUE_MAPPING
@@ -2430,19 +2443,22 @@ static inline void sk_stream_moderate_sndbuf(struct sock *sk)
  * @sk: socket
 *
 * Use the per task page_frag instead of the per socket one for
- * optimization when we know that we're in the normal context and owns
+ * optimization when we know that we're in process context and own
 * everything that's associated with %current.
 *
- * gfpflags_allow_blocking() isn't enough here as direct reclaim may nest
- * inside other socket operations and end up recursing into sk_page_frag()
- * while it's already in use.
+ * Both direct reclaim and page faults can nest inside other
+ * socket operations and end up recursing into sk_page_frag()
+ * while it's already in use: explicitly avoid task page_frag
+ * usage if the caller is potentially doing any of them.
+ *
+ * This assumes that page fault handlers use the GFP_NOFS flags.
 *
 * Return: a per task page_frag if context allows that,
 * otherwise a per socket one.
 */
 static inline struct page_frag *sk_page_frag(struct sock *sk)
 {
-	if (gfpflags_normal_context(sk->sk_allocation))
+	if ((sk->sk_allocation & (__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
+	    (__GFP_DIRECT_RECLAIM | __GFP_FS))
 		return &current->task_frag;
 
 	return &sk->sk_frag;

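The `sk_page_frag()` fix above is the "page frag corruption on page fault" entry from the summary: the task page_frag is only safe when the allocation context is plain process context, so the check now requires direct reclaim and `__GFP_FS` to be allowed while `__GFP_MEMALLOC` is clear. Page-fault and reclaim paths drop `__GFP_FS`, which is exactly how a fault nested inside a TCP send could previously re-enter the task frag while it was in use. The predicate in isolation, with illustrative flag values (the real ones live in include/linux/gfp.h):

```c
/* Illustrative GFP bits, not the kernel's actual values. */
#define __GFP_FS		0x0080u
#define __GFP_DIRECT_RECLAIM	0x0400u
#define __GFP_MEMALLOC		0x2000u

#define GFP_KERNEL	(__GFP_DIRECT_RECLAIM | __GFP_FS)
#define GFP_ATOMIC	0x0u			/* no reclaim, no FS */

/* Task page_frag is safe only for a plain process-context allocation:
 * reclaim allowed, filesystem recursion allowed, not a memalloc socket.
 * Masking three bits and comparing against two enforces all of that in
 * one test, like the new sk_page_frag(). */
static int use_task_frag(unsigned int sk_allocation)
{
	return (sk_allocation &
		(__GFP_DIRECT_RECLAIM | __GFP_MEMALLOC | __GFP_FS)) ==
	       (__GFP_DIRECT_RECLAIM | __GFP_FS);
}
```

Anything else, including a GFP_NOFS-style page-fault context or a `__GFP_MEMALLOC` socket, falls back to the per-socket `sk->sk_frag`.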

@@ -117,7 +117,7 @@
 #define ETH_P_IFE	0xED3E		/* ForCES inter-FE LFB type */
 #define ETH_P_AF_IUCV	0xFBFB		/* IBM af_iucv [ NOT AN OFFICIALLY REGISTERED ID ] */
 
-#define ETH_P_802_3_MIN	0x0600		/* If the value in the ethernet type is less than this value
+#define ETH_P_802_3_MIN	0x0600		/* If the value in the ethernet type is more than this value
					 * then the frame is Ethernet II. Else it is 802.3 */
 
 /*


@@ -49,6 +49,7 @@
	SIPROUND; \
	return (v0 ^ v1) ^ (v2 ^ v3);
 
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key)
 {
 	const u8 *end = data + len - (len % sizeof(u64));
@@ -80,8 +81,8 @@ u64 __siphash_aligned(const void *data, size_t len, const siphash_key_t *key)
	POSTAMBLE
 }
 EXPORT_SYMBOL(__siphash_aligned);
+#endif
 
-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key)
 {
 	const u8 *end = data + len - (len % sizeof(u64));
@@ -113,7 +114,6 @@ u64 __siphash_unaligned(const void *data, size_t len, const siphash_key_t *key)
	POSTAMBLE
 }
 EXPORT_SYMBOL(__siphash_unaligned);
-#endif
 
 /**
  * siphash_1u64 - compute 64-bit siphash PRF value of a u64
@@ -250,6 +250,7 @@ EXPORT_SYMBOL(siphash_3u32);
	HSIPROUND; \
	return (v0 ^ v1) ^ (v2 ^ v3);
 
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
 {
 	const u8 *end = data + len - (len % sizeof(u64));
@@ -280,8 +281,8 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
	HPOSTAMBLE
 }
 EXPORT_SYMBOL(__hsiphash_aligned);
+#endif
 
-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 u32 __hsiphash_unaligned(const void *data, size_t len,
			 const hsiphash_key_t *key)
 {
@@ -313,7 +314,6 @@ u32 __hsiphash_unaligned(const void *data, size_t len,
	HPOSTAMBLE
 }
 EXPORT_SYMBOL(__hsiphash_unaligned);
-#endif
 
 /**
  * hsiphash_1u32 - compute 64-bit hsiphash PRF value of a u32
@@ -418,6 +418,7 @@ EXPORT_SYMBOL(hsiphash_4u32);
	HSIPROUND; \
	return v1 ^ v3;
 
+#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
 {
 	const u8 *end = data + len - (len % sizeof(u32));
@@ -438,8 +439,8 @@ u32 __hsiphash_aligned(const void *data, size_t len, const hsiphash_key_t *key)
	HPOSTAMBLE
 }
 EXPORT_SYMBOL(__hsiphash_aligned);
+#endif
 
-#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 u32 __hsiphash_unaligned(const void *data, size_t len,
			 const hsiphash_key_t *key)
 {
@@ -461,7 +462,6 @@ u32 __hsiphash_unaligned(const void *data, size_t len,
	HPOSTAMBLE
 }
 EXPORT_SYMBOL(__hsiphash_unaligned);
-#endif
 
 /**
  * hsiphash_1u32 - compute 32-bit hsiphash PRF value of a u32


@@ -4210,7 +4210,10 @@ static int __dev_queue_xmit(struct sk_buff *skb, struct net_device *sb_dev)
 	if (dev->flags & IFF_UP) {
 		int cpu = smp_processor_id(); /* ok because BHs are off */
 
-		if (txq->xmit_lock_owner != cpu) {
+		/* Other cpus might concurrently change txq->xmit_lock_owner
+		 * to -1 or to their cpu id, but not to our id.
+		 */
+		if (READ_ONCE(txq->xmit_lock_owner) != cpu) {
 			if (dev_xmit_recursion())
 				goto recursion_alert;


@@ -162,3 +162,22 @@ void dst_cache_destroy(struct dst_cache *dst_cache)
 	free_percpu(dst_cache->cache);
 }
 EXPORT_SYMBOL_GPL(dst_cache_destroy);
+
+void dst_cache_reset_now(struct dst_cache *dst_cache)
+{
+	int i;
+
+	if (!dst_cache->cache)
+		return;
+
+	dst_cache->reset_ts = jiffies;
+	for_each_possible_cpu(i) {
+		struct dst_cache_pcpu *idst = per_cpu_ptr(dst_cache->cache, i);
+		struct dst_entry *dst = idst->dst;
+
+		idst->cookie = 0;
+		idst->dst = NULL;
+		dst_release(dst);
+	}
+}
+EXPORT_SYMBOL_GPL(dst_cache_reset_now);


@@ -323,7 +323,7 @@ int fib_rules_lookup(struct fib_rules_ops *ops, struct flowi *fl,
 		if (!err && ops->suppress && INDIRECT_CALL_MT(ops->suppress,
							      fib6_rule_suppress,
							      fib4_rule_suppress,
-							      rule, arg))
+							      rule, flags, arg))
 			continue;
 
 		if (err != -EAGAIN) {


@@ -1582,7 +1582,7 @@ static int __net_init fib_net_init(struct net *net)
 	int error;
 
 #ifdef CONFIG_IP_ROUTE_CLASSID
-	net->ipv4.fib_num_tclassid_users = 0;
+	atomic_set(&net->ipv4.fib_num_tclassid_users, 0);
 #endif
 	error = ip_fib_net_init(net);
 	if (error < 0)


@@ -141,6 +141,7 @@ INDIRECT_CALLABLE_SCOPE int fib4_rule_action(struct fib_rule *rule,
 }
 
 INDIRECT_CALLABLE_SCOPE bool fib4_rule_suppress(struct fib_rule *rule,
+						int flags,
						struct fib_lookup_arg *arg)
 {
 	struct fib_result *result = (struct fib_result *) arg->result;
@@ -263,7 +264,7 @@ static int fib4_rule_configure(struct fib_rule *rule, struct sk_buff *skb,
 	if (tb[FRA_FLOW]) {
 		rule4->tclassid = nla_get_u32(tb[FRA_FLOW]);
 		if (rule4->tclassid)
-			net->ipv4.fib_num_tclassid_users++;
+			atomic_inc(&net->ipv4.fib_num_tclassid_users);
 	}
 #endif
 
@@ -295,7 +296,7 @@ static int fib4_rule_delete(struct fib_rule *rule)
 #ifdef CONFIG_IP_ROUTE_CLASSID
 	if (((struct fib4_rule *)rule)->tclassid)
-		net->ipv4.fib_num_tclassid_users--;
+		atomic_dec(&net->ipv4.fib_num_tclassid_users);
 #endif
 	net->ipv4.fib_has_custom_rules = true;


@@ -220,7 +220,7 @@ void fib_nh_release(struct net *net, struct fib_nh *fib_nh)
 {
 #ifdef CONFIG_IP_ROUTE_CLASSID
 	if (fib_nh->nh_tclassid)
-		net->ipv4.fib_num_tclassid_users--;
+		atomic_dec(&net->ipv4.fib_num_tclassid_users);
 #endif
 	fib_nh_common_release(&fib_nh->nh_common);
 }
@@ -632,7 +632,7 @@ int fib_nh_init(struct net *net, struct fib_nh *nh,
 #ifdef CONFIG_IP_ROUTE_CLASSID
 		nh->nh_tclassid = cfg->fc_flow;
 		if (nh->nh_tclassid)
-			net->ipv4.fib_num_tclassid_users++;
+			atomic_inc(&net->ipv4.fib_num_tclassid_users);
 #endif
 #ifdef CONFIG_IP_ROUTE_MULTIPATH
 		nh->fib_nh_weight = nh_weight;


@@ -267,6 +267,7 @@ INDIRECT_CALLABLE_SCOPE int fib6_rule_action(struct fib_rule *rule,
 }
 
 INDIRECT_CALLABLE_SCOPE bool fib6_rule_suppress(struct fib_rule *rule,
+						int flags,
						struct fib_lookup_arg *arg)
 {
 	struct fib6_result *res = arg->result;
@@ -294,8 +295,7 @@ INDIRECT_CALLABLE_SCOPE bool fib6_rule_suppress(struct fib_rule *rule,
 	return false;
 
 suppress_route:
-	if (!(arg->flags & FIB_LOOKUP_NOREF))
-		ip6_rt_put(rt);
+	ip6_rt_put_flags(rt, flags);
 	return true;
 }


@@ -248,9 +248,9 @@ INDIRECT_CALLABLE_SCOPE struct sk_buff *ipv6_gro_receive(struct list_head *head,
		 * memcmp() alone below is sufficient, right?
		 */
 		if ((first_word & htonl(0xF00FFFFF)) ||
-			!ipv6_addr_equal(&iph->saddr, &iph2->saddr) ||
-			!ipv6_addr_equal(&iph->daddr, &iph2->daddr) ||
-			*(u16 *)&iph->nexthdr != *(u16 *)&iph2->nexthdr) {
+		    !ipv6_addr_equal(&iph->saddr, &iph2->saddr) ||
+		    !ipv6_addr_equal(&iph->daddr, &iph2->daddr) ||
+		    *(u16 *)&iph->nexthdr != *(u16 *)&iph2->nexthdr) {
 not_same_flow:
 			NAPI_GRO_CB(p)->same_flow = 0;
 			continue;


@@ -952,7 +952,7 @@ static int mctp_route_add(struct mctp_dev *mdev, mctp_eid_t daddr_start,
 }
 
 static int mctp_route_remove(struct mctp_dev *mdev, mctp_eid_t daddr_start,
-			     unsigned int daddr_extent)
+			     unsigned int daddr_extent, unsigned char type)
 {
 	struct net *net = dev_net(mdev->dev);
 	struct mctp_route *rt, *tmp;
@@ -969,7 +969,8 @@ static int mctp_route_remove(struct mctp_dev *mdev, mctp_eid_t daddr_start,
 
 	list_for_each_entry_safe(rt, tmp, &net->mctp.routes, list) {
 		if (rt->dev == mdev &&
-		    rt->min == daddr_start && rt->max == daddr_end) {
+		    rt->min == daddr_start && rt->max == daddr_end &&
+		    rt->type == type) {
 			list_del_rcu(&rt->list);
 			/* TODO: immediate RTM_DELROUTE */
 			mctp_route_release(rt);
@@ -987,7 +988,7 @@ int mctp_route_add_local(struct mctp_dev *mdev, mctp_eid_t addr)
 
 int mctp_route_remove_local(struct mctp_dev *mdev, mctp_eid_t addr)
 {
-	return mctp_route_remove(mdev, addr, 0);
+	return mctp_route_remove(mdev, addr, 0, RTN_LOCAL);
 }
 
 /* removes all entries for a given device */
@@ -1195,7 +1196,7 @@ static int mctp_delroute(struct sk_buff *skb, struct nlmsghdr *nlh,
 	if (rtm->rtm_type != RTN_UNICAST)
 		return -EINVAL;
 
-	rc = mctp_route_remove(mdev, daddr_start, rtm->rtm_dst_len);
+	rc = mctp_route_remove(mdev, daddr_start, rtm->rtm_dst_len, RTN_UNICAST);
 
 	return rc;
 }


@@ -12,7 +12,7 @@
 static netdev_tx_t mctp_test_dev_tx(struct sk_buff *skb,
				     struct net_device *ndev)
 {
-	kfree(skb);
+	kfree_skb(skb);
 	return NETDEV_TX_OK;
 }


@@ -409,7 +409,7 @@ static int mpls_forward(struct sk_buff *skb, struct net_device *dev,
 		goto err;
 
 	/* Find the output device */
-	out_dev = rcu_dereference(nh->nh_dev);
+	out_dev = nh->nh_dev;
 	if (!mpls_output_possible(out_dev))
 		goto tx_err;
 
@@ -698,7 +698,7 @@ static int mpls_nh_assign_dev(struct net *net, struct mpls_route *rt,
 	    (dev->addr_len != nh->nh_via_alen))
 		goto errout;
 
-	RCU_INIT_POINTER(nh->nh_dev, dev);
+	nh->nh_dev = dev;
 
 	if (!(dev->flags & IFF_UP)) {
 		nh->nh_flags |= RTNH_F_DEAD;
@@ -1491,26 +1491,53 @@ static void mpls_dev_destroy_rcu(struct rcu_head *head)
 	kfree(mdev);
 }
 
-static void mpls_ifdown(struct net_device *dev, int event)
+static int mpls_ifdown(struct net_device *dev, int event)
 {
 	struct mpls_route __rcu **platform_label;
 	struct net *net = dev_net(dev);
-	u8 alive, deleted;
 	unsigned index;
 
 	platform_label = rtnl_dereference(net->mpls.platform_label);
 	for (index = 0; index < net->mpls.platform_labels; index++) {
 		struct mpls_route *rt = rtnl_dereference(platform_label[index]);
+		bool nh_del = false;
+		u8 alive = 0;
 
 		if (!rt)
 			continue;
 
-		alive = 0;
-		deleted = 0;
+		if (event == NETDEV_UNREGISTER) {
+			u8 deleted = 0;
+
+			for_nexthops(rt) {
+				if (!nh->nh_dev || nh->nh_dev == dev)
+					deleted++;
+				if (nh->nh_dev == dev)
+					nh_del = true;
+			} endfor_nexthops(rt);
+
+			/* if there are no more nexthops, delete the route */
+			if (deleted == rt->rt_nhn) {
+				mpls_route_update(net, index, NULL, NULL);
+				continue;
+			}
+
+			if (nh_del) {
+				size_t size = sizeof(*rt) + rt->rt_nhn *
					rt->rt_nh_size;
+				struct mpls_route *orig = rt;
+
+				rt = kmalloc(size, GFP_KERNEL);
+				if (!rt)
+					return -ENOMEM;
+				memcpy(rt, orig, size);
+			}
+		}
+
 		change_nexthops(rt) {
 			unsigned int nh_flags = nh->nh_flags;
 
-			if (rtnl_dereference(nh->nh_dev) != dev)
+			if (nh->nh_dev != dev)
 				goto next;
 
 			switch (event) {
@@ -1523,23 +1550,22 @@ static int mpls_ifdown(struct net_device *dev, int event)
 				break;
 			}
 
 			if (event == NETDEV_UNREGISTER)
-				RCU_INIT_POINTER(nh->nh_dev, NULL);
+				nh->nh_dev = NULL;
 
 			if (nh->nh_flags != nh_flags)
 				WRITE_ONCE(nh->nh_flags, nh_flags);
 next:
 			if (!(nh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN)))
 				alive++;
-			if (!rtnl_dereference(nh->nh_dev))
-				deleted++;
 		} endfor_nexthops(rt);
 
 		WRITE_ONCE(rt->rt_nhn_alive, alive);
 
-		/* if there are no more nexthops, delete the route */
-		if (event == NETDEV_UNREGISTER && deleted == rt->rt_nhn)
-			mpls_route_update(net, index, NULL, NULL);
+		if (nh_del)
+			mpls_route_update(net, index, rt, NULL);
 	}
+
+	return 0;
 }
 
 static void mpls_ifup(struct net_device *dev, unsigned int flags)
@@ -1559,14 +1585,12 @@ static void mpls_ifup(struct net_device *dev, unsigned int flags)
 		alive = 0;
 		change_nexthops(rt) {
 			unsigned int nh_flags = nh->nh_flags;
-			struct net_device *nh_dev =
-				rtnl_dereference(nh->nh_dev);
 
 			if (!(nh_flags & flags)) {
 				alive++;
 				continue;
 			}
-			if (nh_dev != dev)
+			if (nh->nh_dev != dev)
 				continue;
 			alive++;
 			nh_flags &= ~flags;
@@ -1597,8 +1621,12 @@ static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
 		return NOTIFY_OK;
 
 	switch (event) {
+		int err;
+
 	case NETDEV_DOWN:
-		mpls_ifdown(dev, event);
+		err = mpls_ifdown(dev, event);
+		if (err)
+			return notifier_from_errno(err);
 		break;
 	case NETDEV_UP:
 		flags = dev_get_flags(dev);
@@ -1609,13 +1637,18 @@ static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
 		break;
 	case NETDEV_CHANGE:
 		flags = dev_get_flags(dev);
-		if (flags & (IFF_RUNNING | IFF_LOWER_UP))
+		if (flags & (IFF_RUNNING | IFF_LOWER_UP)) {
 			mpls_ifup(dev, RTNH_F_DEAD | RTNH_F_LINKDOWN);
-		else
-			mpls_ifdown(dev, event);
+		} else {
+			err = mpls_ifdown(dev, event);
+			if (err)
+				return notifier_from_errno(err);
+		}
 		break;
 	case NETDEV_UNREGISTER:
-		mpls_ifdown(dev, event);
+		err = mpls_ifdown(dev, event);
+		if (err)
+			return notifier_from_errno(err);
 		mdev = mpls_dev_get(dev);
 		if (mdev) {
 			mpls_dev_sysctl_unregister(dev, mdev);
@@ -1626,8 +1659,6 @@ static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
 	case NETDEV_CHANGENAME:
 		mdev = mpls_dev_get(dev);
 		if (mdev) {
-			int err;
-
 			mpls_dev_sysctl_unregister(dev, mdev);
 			err = mpls_dev_sysctl_register(dev, mdev);
 			if (err)
@@ -1994,7 +2025,7 @@ static int mpls_dump_route(struct sk_buff *skb, u32 portid, u32 seq, int event,
 		    nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh),
				nh->nh_via_alen))
 			goto nla_put_failure;
-		dev = rtnl_dereference(nh->nh_dev);
+		dev = nh->nh_dev;
 		if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex))
 			goto nla_put_failure;
 		if (nh->nh_flags & RTNH_F_LINKDOWN)
@@ -2012,7 +2043,7 @@ static int mpls_dump_route(struct sk_buff *skb, u32 portid, u32 seq, int event,
 			goto nla_put_failure;
 
 		for_nexthops(rt) {
-			dev = rtnl_dereference(nh->nh_dev);
+			dev = nh->nh_dev;
 			if (!dev)
 				continue;
 
@@ -2123,18 +2154,14 @@ static int mpls_valid_fib_dump_req(struct net *net, const struct nlmsghdr *nlh,
 static bool mpls_rt_uses_dev(struct mpls_route *rt,
			     const struct net_device *dev)
{ {
struct net_device *nh_dev;
if (rt->rt_nhn == 1) { if (rt->rt_nhn == 1) {
struct mpls_nh *nh = rt->rt_nh; struct mpls_nh *nh = rt->rt_nh;
nh_dev = rtnl_dereference(nh->nh_dev); if (nh->nh_dev == dev)
if (dev == nh_dev)
return true; return true;
} else { } else {
for_nexthops(rt) { for_nexthops(rt) {
nh_dev = rtnl_dereference(nh->nh_dev); if (nh->nh_dev == dev)
if (nh_dev == dev)
return true; return true;
} endfor_nexthops(rt); } endfor_nexthops(rt);
} }
@ -2222,7 +2249,7 @@ static inline size_t lfib_nlmsg_size(struct mpls_route *rt)
size_t nhsize = 0; size_t nhsize = 0;
for_nexthops(rt) { for_nexthops(rt) {
if (!rtnl_dereference(nh->nh_dev)) if (!nh->nh_dev)
continue; continue;
nhsize += nla_total_size(sizeof(struct rtnexthop)); nhsize += nla_total_size(sizeof(struct rtnexthop));
/* RTA_VIA */ /* RTA_VIA */
@ -2468,7 +2495,7 @@ static int mpls_getroute(struct sk_buff *in_skb, struct nlmsghdr *in_nlh,
nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh), nla_put_via(skb, nh->nh_via_table, mpls_nh_via(rt, nh),
nh->nh_via_alen)) nh->nh_via_alen))
goto nla_put_failure; goto nla_put_failure;
dev = rtnl_dereference(nh->nh_dev); dev = nh->nh_dev;
if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex)) if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex))
goto nla_put_failure; goto nla_put_failure;
@ -2507,7 +2534,7 @@ static int resize_platform_label_table(struct net *net, size_t limit)
rt0 = mpls_rt_alloc(1, lo->addr_len, 0); rt0 = mpls_rt_alloc(1, lo->addr_len, 0);
if (IS_ERR(rt0)) if (IS_ERR(rt0))
goto nort0; goto nort0;
RCU_INIT_POINTER(rt0->rt_nh->nh_dev, lo); rt0->rt_nh->nh_dev = lo;
rt0->rt_protocol = RTPROT_KERNEL; rt0->rt_protocol = RTPROT_KERNEL;
rt0->rt_payload_type = MPT_IPV4; rt0->rt_payload_type = MPT_IPV4;
rt0->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT; rt0->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT;
@ -2521,7 +2548,7 @@ static int resize_platform_label_table(struct net *net, size_t limit)
rt2 = mpls_rt_alloc(1, lo->addr_len, 0); rt2 = mpls_rt_alloc(1, lo->addr_len, 0);
if (IS_ERR(rt2)) if (IS_ERR(rt2))
goto nort2; goto nort2;
RCU_INIT_POINTER(rt2->rt_nh->nh_dev, lo); rt2->rt_nh->nh_dev = lo;
rt2->rt_protocol = RTPROT_KERNEL; rt2->rt_protocol = RTPROT_KERNEL;
rt2->rt_payload_type = MPT_IPV6; rt2->rt_payload_type = MPT_IPV6;
rt2->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT; rt2->rt_ttl_propagate = MPLS_TTL_PROP_DEFAULT;

diff --git a/net/mpls/internal.h b/net/mpls/internal.h
--- a/net/mpls/internal.h
+++ b/net/mpls/internal.h

@@ -87,7 +87,7 @@ enum mpls_payload_type {
 };
 
 struct mpls_nh { /* next hop label forwarding entry */
-	struct net_device __rcu *nh_dev;
+	struct net_device *nh_dev;
 
 	/* nh_flags is accessed under RCU in the packet path; it is
 	 * modified handling netdev events with rtnl lock held

diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
--- a/net/netlink/af_netlink.c
+++ b/net/netlink/af_netlink.c

@@ -1852,6 +1852,11 @@ static int netlink_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
 	if (msg->msg_flags & MSG_OOB)
 		return -EOPNOTSUPP;
 
+	if (len == 0) {
+		pr_warn_once("Zero length message leads to an empty skb\n");
+		return -ENODATA;
+	}
+
 	err = scm_send(sock, msg, &scm, true);
 	if (err < 0)
 		return err;

diff --git a/net/rds/tcp.c b/net/rds/tcp.c
--- a/net/rds/tcp.c
+++ b/net/rds/tcp.c

@@ -500,7 +500,7 @@ void rds_tcp_tune(struct socket *sock)
 		sk->sk_userlocks |= SOCK_SNDBUF_LOCK;
 	}
 	if (rtn->rcvbuf_size > 0) {
-		sk->sk_sndbuf = rtn->rcvbuf_size;
+		sk->sk_rcvbuf = rtn->rcvbuf_size;
 		sk->sk_userlocks |= SOCK_RCVBUF_LOCK;
 	}
 	release_sock(sk);

diff --git a/net/rxrpc/conn_client.c b/net/rxrpc/conn_client.c
--- a/net/rxrpc/conn_client.c
+++ b/net/rxrpc/conn_client.c

@@ -135,16 +135,20 @@ struct rxrpc_bundle *rxrpc_get_bundle(struct rxrpc_bundle *bundle)
 	return bundle;
 }
 
+static void rxrpc_free_bundle(struct rxrpc_bundle *bundle)
+{
+	rxrpc_put_peer(bundle->params.peer);
+	kfree(bundle);
+}
+
 void rxrpc_put_bundle(struct rxrpc_bundle *bundle)
 {
 	unsigned int d = bundle->debug_id;
 	unsigned int u = atomic_dec_return(&bundle->usage);
 
 	_debug("PUT B=%x %u", d, u);
-	if (u == 0) {
-		rxrpc_put_peer(bundle->params.peer);
-		kfree(bundle);
-	}
+	if (u == 0)
+		rxrpc_free_bundle(bundle);
 }
 
 /*
@@ -328,7 +332,7 @@ static struct rxrpc_bundle *rxrpc_look_up_bundle(struct rxrpc_conn_parameters *c
 	return candidate;
 
 found_bundle_free:
-	kfree(candidate);
+	rxrpc_free_bundle(candidate);
 found_bundle:
 	rxrpc_get_bundle(bundle);
 	spin_unlock(&local->client_bundles_lock);

diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
--- a/net/rxrpc/peer_object.c
+++ b/net/rxrpc/peer_object.c

@@ -299,6 +299,12 @@ static struct rxrpc_peer *rxrpc_create_peer(struct rxrpc_sock *rx,
 	return peer;
 }
 
+static void rxrpc_free_peer(struct rxrpc_peer *peer)
+{
+	rxrpc_put_local(peer->local);
+	kfree_rcu(peer, rcu);
+}
+
 /*
  * Set up a new incoming peer.  There shouldn't be any other matching peers
  * since we've already done a search in the list from the non-reentrant context
@@ -365,7 +371,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_sock *rx,
 		spin_unlock_bh(&rxnet->peer_hash_lock);
 
 		if (peer)
-			kfree(candidate);
+			rxrpc_free_peer(candidate);
 		else
 			peer = candidate;
 	}
@@ -420,8 +426,7 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
 	list_del_init(&peer->keepalive_link);
 	spin_unlock_bh(&rxnet->peer_hash_lock);
 
-	rxrpc_put_local(peer->local);
-	kfree_rcu(peer, rcu);
+	rxrpc_free_peer(peer);
 }
 
 /*
@@ -457,8 +462,7 @@ void rxrpc_put_peer_locked(struct rxrpc_peer *peer)
 	if (n == 0) {
 		hash_del_rcu(&peer->hash_link);
 		list_del_init(&peer->keepalive_link);
-		rxrpc_put_local(peer->local);
-		kfree_rcu(peer, rcu);
+		rxrpc_free_peer(peer);
 	}
 }

diff --git a/net/smc/smc_close.c b/net/smc/smc_close.c
--- a/net/smc/smc_close.c
+++ b/net/smc/smc_close.c

@@ -195,6 +195,7 @@ int smc_close_active(struct smc_sock *smc)
 	int old_state;
 	long timeout;
 	int rc = 0;
+	int rc1 = 0;
 
 	timeout = current->flags & PF_EXITING ?
 		  0 : sock_flag(sk, SOCK_LINGER) ?
@@ -232,8 +233,11 @@ int smc_close_active(struct smc_sock *smc)
 			/* actively shutdown clcsock before peer close it,
 			 * prevent peer from entering TIME_WAIT state.
 			 */
-			if (smc->clcsock && smc->clcsock->sk)
-				rc = kernel_sock_shutdown(smc->clcsock, SHUT_RDWR);
+			if (smc->clcsock && smc->clcsock->sk) {
+				rc1 = kernel_sock_shutdown(smc->clcsock,
+							   SHUT_RDWR);
+				rc = rc ? rc : rc1;
+			}
 		} else {
 			/* peer event has changed the state */
 			goto again;

diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
--- a/net/smc/smc_core.c
+++ b/net/smc/smc_core.c

@@ -625,18 +625,17 @@ int smcd_nl_get_lgr(struct sk_buff *skb, struct netlink_callback *cb)
 void smc_lgr_cleanup_early(struct smc_connection *conn)
 {
 	struct smc_link_group *lgr = conn->lgr;
-	struct list_head *lgr_list;
 	spinlock_t *lgr_lock;
 
 	if (!lgr)
 		return;
 
 	smc_conn_free(conn);
-	lgr_list = smc_lgr_list_head(lgr, &lgr_lock);
+	smc_lgr_list_head(lgr, &lgr_lock);
 	spin_lock_bh(lgr_lock);
 	/* do not use this link group for new connections */
-	if (!list_empty(lgr_list))
-		list_del_init(lgr_list);
+	if (!list_empty(&lgr->list))
+		list_del_init(&lgr->list);
 	spin_unlock_bh(lgr_lock);
 	__smc_lgr_terminate(lgr, true);
 }

diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
--- a/net/tls/tls_sw.c
+++ b/net/tls/tls_sw.c

@@ -521,7 +521,7 @@ static int tls_do_encryption(struct sock *sk,
 	memcpy(&rec->iv_data[iv_offset], tls_ctx->tx.iv,
 	       prot->iv_size + prot->salt_size);
 
-	xor_iv_with_seq(prot, rec->iv_data, tls_ctx->tx.rec_seq);
+	xor_iv_with_seq(prot, rec->iv_data + iv_offset, tls_ctx->tx.rec_seq);
 
 	sge->offset += prot->prepend_size;
 	sge->length -= prot->prepend_size;
@@ -1499,7 +1499,7 @@ static int decrypt_internal(struct sock *sk, struct sk_buff *skb,
 	else
 		memcpy(iv + iv_offset, tls_ctx->rx.iv, prot->salt_size);
 
-	xor_iv_with_seq(prot, iv, tls_ctx->rx.rec_seq);
+	xor_iv_with_seq(prot, iv + iv_offset, tls_ctx->rx.rec_seq);
 
 	/* Prepare AAD */
 	tls_make_aad(aad, rxm->full_len - prot->overhead_size +
