path: root/drivers/net/ethernet/mellanox/mlx5/core/en.h
* net/mlx5e: Update MPWQE stride size when modifying CQE compress state
  Saeed Mahameed, 2017-02-23 (1 file changed, -0/+1)

  When the admin enables or disables CQE compression, the MPWQE stride
  size must be updated as well:

      CQE compress ON  ==> stride size = 256B
      CQE compress OFF ==> stride size = 64B

  This is already done on driver load via mlx5e_set_rq_type_params; all
  we need is to also call it on admin changes of the CQE compression
  state, whether via priv flags or when changing the timestamping state
  (which is mutually exclusive with CQE compression).

  This bug causes no functional damage; it only makes CQE compression
  occur less often, since on ConnectX4-LX CQE compression is performed
  only on packets smaller than the stride size.

  Tested:
      ethtool --set-priv-flags ethxx rx_cqe_compress on
      pktgen with 64 < pkt size < 256 and netperf TCP_STREAM (IPv4/IPv6)
      verify `ethtool -S ethxx | grep compress` advances more often

  Fixes: 7219ab34f184 ("net/mlx5e: CQE compression")
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
  Cc: kernel-team@fb.com
  Signed-off-by: David S. Miller <davem@davemloft.net>
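  The stride-size selection above reduces to a one-line choice; a minimal
  standalone sketch follows (constant names and the helper are
  illustrative assumptions, not the driver's symbols):

```c
#include <stdbool.h>
#include <stdio.h>

#define MPWRQ_CQE_CMPRS_STRIDE_SZ 256 /* bytes, compression on  */
#define MPWRQ_DEF_STRIDE_SZ        64 /* bytes, compression off */

static int mpwqe_stride_size(bool cqe_compress_enabled)
{
	return cqe_compress_enabled ? MPWRQ_CQE_CMPRS_STRIDE_SZ
				    : MPWRQ_DEF_STRIDE_SZ;
}

int main(void)
{
	/* any admin toggle of the priv flag must re-run this selection */
	printf("compress on:  %dB\n", mpwqe_stride_size(true));
	printf("compress off: %dB\n", mpwqe_stride_size(false));
	return 0;
}
```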
* net/mlx5e: Bring back bfreg uar map dedicated pointer
  Saeed Mahameed, 2017-02-06 (1 file changed, -2/+3)

  The 4K UAR series modified the mlx5e driver to use the new bfreg API,
  and mistakenly removed the dedicated sq->uar_map iomem pointer, which
  was meant to be read from the xmit path for cache locality. Fix that
  by returning the pointer to the SQ struct.

  Fixes: 7309cb4ad71e ("IB/mlx5: Support 4k UAR for libmlx5")
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
* net/mlx5e: XDP Tx, no inline copy on ConnectX-5
  Saeed Mahameed, 2017-02-06 (1 file changed, -2/+1)

  ConnectX-5 and later HW generations report min inline mode ==
  MLX5_INLINE_MODE_NONE, which means the driver is not required to copy
  packet headers into the inline fields of the TX WQE. Avoid the copy to
  the inline segment in the XDP TX routine when the HW inline mode
  doesn't require it. This improves CPU utilization and boosts XDP TX
  performance.

  Tested with xdp2 single flow:
  CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
  HCA: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]

  Before: 7.4Mpps
  After:  7.8Mpps
  Improvement: 5%

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Reviewed-by: Tariq Toukan <tariqt@mellanox.com>
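  A sketch of the conditional copy described above, with simplified
  stand-in types; MLX5_INLINE_MODE_NONE is the real mode name, everything
  else here is an assumption:

```c
#include <stdint.h>
#include <string.h>

enum { MLX5_INLINE_MODE_NONE = 0 }; /* value assumed for the sketch */
#define XDP_MIN_INLINE 18 /* assumed: ETH_HLEN + VLAN_HLEN */

struct xdp_wqe { uint8_t inline_hdr[XDP_MIN_INLINE]; uint16_t inline_sz; };

void xdp_fill_wqe(struct xdp_wqe *w, const uint8_t *pkt, int min_inline_mode)
{
	if (min_inline_mode != MLX5_INLINE_MODE_NONE) {
		/* pre-ConnectX-5: headers must be copied into the WQE */
		memcpy(w->inline_hdr, pkt, XDP_MIN_INLINE);
		w->inline_sz = XDP_MIN_INLINE;
	} else {
		/* ConnectX-5+: HW fetches headers itself, skip the copy */
		w->inline_sz = 0;
	}
}
```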
* net/mlx5: Configure cache line size for start and end padding
  Daniel Jurgens, 2017-02-06 (1 file changed, -2/+7)

  There is a hardware feature that pads the start or end of a DMA to be
  cache line aligned, to avoid RMWs on the last cache line. The default
  cache line size setting for this feature is 64B; this change
  configures the hardware to use 128B alignment on systems with 128B
  cache lines.

  In addition, we lower-bound the MPWRQ stride by the HCA cache line in
  mlx5e: the MPWRQ stride should be at least the HCA cache line. The
  current default is 64B, and when the HCA_CAP.cache_line_128byte
  capability is set, the MPWRQ RX stride is automatically aligned to
  128B.

  Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
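  A runnable sketch of the stride lower-bounding, assuming a 128B cache
  line when the capability bit is set (names are illustrative):

```c
#include <stdio.h>

static int mpwrq_stride_size(int requested, int cache_line_128byte_cap)
{
	int hca_cacheline = cache_line_128byte_cap ? 128 : 64;

	/* stride must be at least the HCA cache line */
	return requested > hca_cacheline ? requested : hca_cacheline;
}

int main(void)
{
	printf("%d\n", mpwrq_stride_size(64, 1));  /* -> 128 */
	printf("%d\n", mpwrq_stride_size(256, 1)); /* -> 256 */
	return 0;
}
```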
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
  David S. Miller, 2017-02-02 (1 file changed, -3/+4)

  All merge conflicts were simple overlapping changes.

  Signed-off-by: David S. Miller <davem@davemloft.net>
| * net/mlx5e: Fix update of hash function/key via ethtool
  Gal Pressman, 2017-01-29 (1 file changed, -1/+2)

  Modifying the TIR hash should change the selected-fields bitmask in
  addition to the hash function and key. Formerly, on ethtool
  mlx5e_set_rxfh ("ethtool -X") we would not set this field, zeroing its
  value, which means no packet fields are used for the RX RSS hash
  calculation, causing all traffic to arrive in RQ[0]. On driver load,
  out of the box, we don't have this issue, since the TIR hash is fully
  created from scratch.

  Tested:
      ethtool -X ethX hkey <new key>
      ethtool -X ethX hfunc <new func>
      ethtool -X ethX equal <new indirection table>

  All cases were verified with TCP multi-stream traffic over IPv4 & IPv6.

  Fixes: bdfc028de1b3 ("net/mlx5e: Fix ethtool RX hash func configuration change")
  Signed-off-by: Gal Pressman <galp@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
| * net/mlx5: Change ENOTSUPP to EOPNOTSUPP
  Or Gerlitz, 2017-01-29 (1 file changed, -2/+2)

  As ENOTSUPP is specific to NFS, change the return error value to
  EOPNOTSUPP in various places in the mlx5 driver.

  Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
  Suggested-by: Yotam Gigi <yotamg@mellanox.com>
  Reviewed-by: Matan Barak <matanb@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* | net/mlx5e: CQE compression control code reuse
  Shaker Daibes, 2017-01-24 (1 file changed, -1/+1)

  This patch is intended for code reuse of the
  mlx5e_modify_rx_cqe_compression function.

  Signed-off-by: Shaker Daibes <shakerd@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* | net/mlx5e: Reduce memory consumption on kdump kernel
  Kamal Heib, 2017-01-24 (1 file changed, -6/+1)

  Reduce memory consumption on the kdump kernel by decreasing the number
  of channels to 1 and the size of RQs and SQs to the minimal values.

  Signed-off-by: Kamal Heib <kamalh@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* | net/mlx5e: Receive s-tagged packets in promiscuous mode
  Mohamad Haj Yahia, 2017-01-19 (1 file changed, -2/+3)

  Today, when the driver enters promiscuous mode or the vlan filter is
  disabled, we add a flow rule to receive any c-tagged packets, so
  s-tagged packets are dropped. In order to receive s-tagged packets as
  well, we need to add a flow rule to receive any s-tagged packet.

  Fixes: 7cb21b794baa ('net/mlx5e: Rename en_flow_table.c to en_fs.c')
  Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* | net/mlx5e: Implement 1PPS support
  Eugenia Emantayev, 2017-01-19 (1 file changed, -0/+3)

  This patch enables 1PPS IN and 1PPS OUT support according to the
  advertised HCA capability. A single pin may be configured to one of
  the above mutually exclusive functions via standard Linux tools and
  APIs, for example the testptp open source application.

  Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* | net/mlx5e: Support bpf_xdp_adjust_head()
  Martin KaFai Lau, 2017-01-18 (1 file changed, -0/+4)

  This patch adds bpf_xdp_adjust_head() support to mlx5e.

  1. rx_headroom is added to struct mlx5e_rq. It uses an existing
     4-byte hole in the struct.
  2. The adjusted data length is checked against MLX5E_XDP_MIN_INLINE
     and MLX5E_SW2HW_MTU(rq->netdev->mtu).
  3. The macro MLX5E_SW2HW_MTU is moved from en_main.c to en.h.
     MLX5E_HW2SW_MTU is also moved to en.h for symmetry, but it is not
     a must.

  v2:
  - Keep the xdp specific logic in mlx5e_xdp_handle()
  - Update dma_len after the sanity checks in mlx5e_xmit_xdp_frame()

  Signed-off-by: Martin KaFai Lau <kafai@fb.com>
  Acked-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
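  A sketch of the length check in item 2, using assumed constants in the
  spirit of MLX5E_XDP_MIN_INLINE and MLX5E_SW2HW_MTU (the exact macro
  bodies may differ):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define ETH_HLEN    14
#define VLAN_HLEN    4
#define ETH_FCS_LEN  4
/* assumed shape of MLX5E_SW2HW_MTU: add L2 overhead to the SW MTU */
#define SW2HW_MTU(mtu) ((mtu) + ETH_HLEN + 2 * VLAN_HLEN + ETH_FCS_LEN)
#define XDP_MIN_INLINE 18 /* assumed stand-in for MLX5E_XDP_MIN_INLINE */

bool xdp_len_ok(const uint8_t *data, const uint8_t *data_end, int sw_mtu)
{
	ptrdiff_t len = data_end - data;

	/* reject frames bpf_xdp_adjust_head() made too short or too long */
	return len >= XDP_MIN_INLINE && len <= SW2HW_MTU(sw_mtu);
}
```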
* | IB/mlx5: Support 4k UAR for libmlx5
  Eli Cohen, 2017-01-09 (1 file changed, -5/+4)

  Add fields to structs to convey to the kernel an indication of whether
  the library supports multiple UARs per page, and return to the library
  the size of a UAR based on the queried value.

  Signed-off-by: Eli Cohen <eli@mellanox.com>
  Reviewed-by: Matan Barak <matanb@mellanox.com>
  Signed-off-by: Leon Romanovsky <leon@kernel.org>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* | IB/mlx5: Use blue flame register allocator in mlx5_ib
  Eli Cohen, 2017-01-09 (1 file changed, -1/+1)

  Make use of the blue flame registers allocator in mlx5_ib. Since blue
  flame was not really supported, we remove all the code related to blue
  flame and let all consumers use the same blue flame register; once
  blue flame is supported we will add the code. As part of this patch we
  also move the definition of struct mlx5_bf to mlx5_ib.h, as it is only
  used by mlx5_ib.

  Signed-off-by: Eli Cohen <eli@mellanox.com>
  Reviewed-by: Matan Barak <matanb@mellanox.com>
  Signed-off-by: Leon Romanovsky <leon@kernel.org>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
  David S. Miller, 2016-12-06 (1 file changed, -2/+2)
| * net/mlx5e: Change the SQ/RQ operational state to positive logic
  Mohamad Haj Yahia, 2016-12-06 (1 file changed, -2/+2)

  When using negative logic (i.e. a FLUSH state), after an RQ/SQ reopen
  there is a time interval in which the RQ/SQ is not really ready, yet
  the state indicates that it is not in the FLUSH state, because the
  initial SQ/RQ struct memory starts as zeros. Now the state indicates
  whether the SQ/RQ is opened, and we set the READY state only after
  finishing the preparation of all the SQ/RQ resources.

  Fixes: 6e8dd6d6f4bd ("net/mlx5e: Don't wait for SQ completions on close")
  Fixes: f2fde18c52a7 ("net/mlx5e: Don't wait for RQ completions on close")
  Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
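  A compilable sketch of why the positive logic is safer with
  zero-initialized memory (bit name assumed):

```c
#include <stdbool.h>

enum { RQ_STATE_ENABLED_BIT = 0 }; /* bit name assumed */

struct rq { unsigned long state; };

/* a freshly zeroed struct reads as "not enabled": a safe default,
 * unlike a zeroed FLUSH bit, which read as "looks usable" */
bool rq_can_post(const struct rq *rq)
{
	return rq->state & (1UL << RQ_STATE_ENABLED_BIT);
}

/* set READY/ENABLED only after all resources are fully prepared */
void rq_mark_ready(struct rq *rq)
{
	rq->state |= 1UL << RQ_STATE_ENABLED_BIT;
}
```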
* | net/mlx5e: Create UMR MKey per RQ
  Tariq Toukan, 2016-12-02 (1 file changed, -7/+5)

  In the Striding RQ implementation, we used a single UMR (User-Mode
  Memory Registration) memory key for all RQs. When the product of the
  number of RQs and their size gets high, we hit a limitation of the u16
  field size in FW. Here we move to using a UMR memory key per RQ, so we
  can scale to any number of rings, with the maximum buffer size in
  each.

  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/mlx5e: Implement Fragmented Work Queue (WQ)
  Tariq Toukan, 2016-12-02 (1 file changed, -1/+1)

  Add a new struct type, mlx5_frag_buf, which is used to allocate
  fragmented buffers rather than contiguous ones, and make the
  Completion Queues (CQs) use it, as they are big (a default of 2MB per
  CQ in Striding RQ).

  This fixes failures of the type:
      "mlx5e_open_locked: mlx5e_open_channels failed, -12"
  caused by dma_zalloc_coherent having insufficient contiguous coherent
  memory to satisfy the driver's request when the user tries to set up
  more or larger rings.

  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
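  A user-space sketch of the fragmented-allocation idea with illustrative
  names (the real mlx5_frag_buf deals with DMA-coherent pages):

```c
#include <stdlib.h>

#define FRAG_SZ 4096 /* assume one page per fragment */

struct frag_buf {
	int    nfrags;
	void **frags;
};

/* allocate `size` bytes as page-sized fragments instead of one big chunk */
int frag_buf_alloc(struct frag_buf *buf, size_t size)
{
	size_t i, n = (size + FRAG_SZ - 1) / FRAG_SZ;

	buf->frags = calloc(n, sizeof(*buf->frags));
	if (!buf->frags)
		return -1;
	for (i = 0; i < n; i++) {
		buf->frags[i] = malloc(FRAG_SZ); /* stands in for a DMA page */
		if (!buf->frags[i])
			return -1; /* real code would unwind what it got */
	}
	buf->nfrags = (int)n;
	return 0;
}
```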
* | net/mlx5e: Add CQE compression user control
  Shaker Daibes, 2016-11-28 (1 file changed, -2/+3)

  The user can now override the automatic driver decision using the
  rx_cqe_compress flag, which is the preference for CQE compression. The
  flag is initialized with the automatic driver decision.

  Signed-off-by: Shaker Daibes <shakerd@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/mlx5e: Moves pflags to priv->params
  Shaker Daibes, 2016-11-28 (1 file changed, -7/+9)

  pflags is a configuration parameter of the netdev, so it naturally
  belongs in priv->params. Also introduce MLX5E_GET_PFLAG.

  Signed-off-by: Shaker Daibes <shakerd@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
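  A hedged sketch of priv-flag helpers in the spirit of MLX5E_GET_PFLAG
  (the exact driver macros may differ); pflags is a plain bitmask stored
  in the params struct:

```c
#include <stdbool.h>
#include <stdio.h>

#define PFLAG_RX_CQE_COMPRESS (1 << 0) /* flag value assumed */

struct params { unsigned int pflags; };

#define GET_PFLAG(p, f) (!!((p)->pflags & (f)))
#define SET_PFLAG(p, f, on) \
	do { if (on) (p)->pflags |= (f); else (p)->pflags &= ~(f); } while (0)

int main(void)
{
	struct params params = { 0 };

	SET_PFLAG(&params, PFLAG_RX_CQE_COMPRESS, true);
	printf("%d\n", GET_PFLAG(&params, PFLAG_RX_CQE_COMPRESS)); /* 1 */
	return 0;
}
```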
* | net/mlx5e: Add support for loopback selftest
  Saeed Mahameed, 2016-11-28 (1 file changed, -1/+2)

  Extend the self diagnostic tests to support a loopback test. The
  loopback test doesn't require the offline flag; it uses the generic
  dev_queue_xmit and a dedicated packet_type to capture and verify mlx5e
  selftest loopback packets.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Kamal Heib <kamalh@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/mlx5e: Add support for ethtool self diagnostics test
  Kamal Heib, 2016-11-28 (1 file changed, -0/+5)

  The self diagnostics test implementation includes the following
  features:
  1. Link Test: check that the link is in the up state.
  2. Speed Test: check that the link was negotiated correctly.
  3. Health Test: check the device health.

  Signed-off-by: Kamal Heib <kamalh@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/mlx5e: ConnectX-4 firmware support for DCBX
  Huy Nguyen, 2016-11-28 (1 file changed, -0/+2)

  DCBX is by default controlled by firmware where the dcbx capability
  bit is set. In this mode, firmware is responsible for reading/sending
  the TLV packets from/to the remote partner. This patch sets up the
  infrastructure to move between HOST/FW DCBX control modes.

  Signed-off-by: Huy Nguyen <huyn@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/mlx5e: Read ETS settings directly from firmware
  Huy Nguyen, 2016-11-28 (1 file changed, -3/+3)

  Issue description: the current implementation saves the ETS settings
  from the user in a temporary soft copy and returns this copy when the
  user queries the ETS settings. With the new DCBX firmware, the ETS
  settings can be changed by firmware when DCBX is in firmware
  controlled mode; therefore, the user would obtain wrong values from
  the temporary soft copy.

  Solution:
  1. Read the ETS settings directly from firmware.
  2. For tc_tsa:
     a. Initialize tc_tsa to vendor IEEE_8021QAZ_TSA_VENDOR at netdev
        creation.
     b. When reading ETS settings from FW, if the traffic class
        bandwidth is less than 100, set tc_tsa to IEEE_8021QAZ_TSA_ETS.

  This implementation covers the scenario where DCBX is in FW control
  and the willing bit is on, which means the ETS settings are dictated
  by the remote switch. Also check the ETS capability where needed.

  Signed-off-by: Huy Nguyen <huyn@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/mlx5e: Support DCBX CEE API
  Huy Nguyen, 2016-11-28 (1 file changed, -0/+24)

  Add a DCBX CEE API interface for ConnectX-4. Configurations are stored
  in a temporary structure and are applied to the card's firmware when
  the CEE setall callback function is called.

  Notes:
  - A priority group in CEE is equivalent to a traffic class in the
    ConnectX-4 hardware spec.
  - BW allocation per priority in CEE is not supported, because
    ConnectX-4 only supports BW allocation per traffic class.
  - User priority in CEE does not have an equivalent term in ConnectX-4;
    therefore, user priority to priority mapping in CEE is not
    supported.

  Signed-off-by: Huy Nguyen <huyn@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/mlx5: Enable to query min inline for a specific vport
  Roi Dayan, 2016-11-24 (1 file changed, -6/+0)

  Also move the inline capabilities enum to a shared header, vport.h.

  Signed-off-by: Roi Dayan <roid@mellanox.com>
  Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/mlx5e: Support HW (offloaded) and SW counters for SRIOV switchdev mode
  Or Gerlitz, 2016-11-24 (1 file changed, -2/+7)

  Switchdev driver net-device port statistics should follow the model
  introduced in commit a5ea31f57309 ("Merge branch
  net-offloaded-stats"). For VF reps we return the SRIOV eswitch vport
  stats as the usual ones, and SW stats if asked. For the PF, if we're
  in switchdev mode, we return the uplink stats, and SW stats if asked;
  otherwise, as before. The uplink stats are implemented using the PPCNT
  802_3 counters, which are already being read/cached by the driver.

  Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | net/mlx5e: Add ndo_udp_tunnel_add to VF representors
  Hadar Hen Zion, 2016-11-09 (1 file changed, -0/+4)

  By implementing this ndo, the host stack will set the vxlan UDP port
  on the VF representor netdevices as well. This allows the TC offload
  code in the driver, when it gets a tunnel key set action, to identify
  the UDP port as vxlan, making the rule a candidate for offloading.

  Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | Merge tag 'shared-for-4.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma
  David S. Miller, 2016-10-30 (1 file changed, -7/+7)

  Saeed Mahameed says:

  ====================
  Mellanox mlx5 core driver updates 2016-10-25

  This series contains some updates and fixes of the mlx5 core and IB
  drivers, with the addition of two features that demand new low level
  commands and infrastructure updates, needed for both the net and rdma
  subsystems:
  - SRIOV VF max rate limit support
  - mlx5e tc support for FWD rules with counter

  Updates and fixes:

  From Saeed Mahameed (2):
  - mlx5 IB: Skip handling unknown mlx5 events
  - Add ConnectX-5 PCIe 4.0 VF device ID

  From Artemy Kovalyov (2):
  - Update struct mlx5_ifc_xrqc_bits
  - Ensure SRQ physical address structure endianness

  From Eugenia Emantayev (1):
  - Fix length of async_event_mask

  New features:

  From Mohamad Haj Yahia (3): mlx5 SRIOV VF max rate limit support
  - Introduce TSAR manipulation firmware commands
  - Introduce E-switch QoS management
  - Add SRIOV VF max rate configuration support

  From Mark Bloch (7): mlx5e tc support for FWD rule with counter
  - Don't unlock fte while still using it
  - Use fte status to decide on firmware command
  - Refactor find_flow_rule
  - Group similar rules under the same fte
  - Add multi dest support
  - Add option to add fwd rule with counter
  - mlx5e tc support for FWD rule with counter

  Mark fixed two trivial issues with the flow steering core, and did
  some refactoring in the flow steering API to support adding
  multi-destination rules to the same hardware flow table entry at
  once. The last two patches add the ability to populate a flow rule
  with a flow counter on the same flow entry.

  V2: Dropped some patches that added new structures without adding any
  usage of them. Added the SRIOV VF max rate configuration support
  patch, which introduces the usage of the TSAR infrastructure. Added
  flow steering fixes and refactoring, in addition to mlx5 tc support
  for a forward rule with counter.
  ====================

  Signed-off-by: David S. Miller <davem@davemloft.net>
| * net/mlx5: Add multi dest support
  Mark Bloch, 2016-10-30 (1 file changed, -7/+7)

  Currently, when calling mlx5_add_flow_rule, we accept only one flow
  destination; this commit allows passing multiple destinations.

  This change forces us to change the return structure to a more
  flexible one. We introduce a flow handle (struct mlx5_flow_handle);
  it holds internally the number of rules created and an array where
  each cell points to a flow rule.

  From the consumer's (of mlx5_add_flow_rule) point of view, this change
  is only cosmetic and requires only changing the type of the returned
  value they store. From the core's point of view, we now need to use a
  loop when allocating and deleting rules (e.g. given a flow handle).

  Signed-off-by: Mark Bloch <markb@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Leon Romanovsky <leon@kernel.org>
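  A simplified sketch of the handle-with-rule-array shape described
  above, and the loop the core now needs on deletion (types are
  stand-ins, not the driver's structs):

```c
#include <stdlib.h>

struct flow_rule { int id; }; /* one HW rule towards one destination */

struct flow_handle {
	int num_rules;
	struct flow_rule *rule[]; /* one entry per destination */
};

struct flow_handle *add_flow_rules(int num_dests)
{
	struct flow_handle *h =
		malloc(sizeof(*h) + num_dests * sizeof(h->rule[0]));
	if (!h)
		return NULL;
	h->num_rules = num_dests;
	for (int i = 0; i < num_dests; i++)
		h->rule[i] = calloc(1, sizeof(*h->rule[i]));
	return h;
}

/* the core now loops over the rules owned by one handle */
void del_flow_rules(struct flow_handle *h)
{
	for (int i = 0; i < h->num_rules; i++)
		free(h->rule[i]);
	free(h);
}
```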
* | net/mlx5e: Choose best nearest LRO timeout
  Saeed Mahameed, 2016-10-29 (1 file changed, -0/+5)

  Instead of predicting the index of the wanted LRO timeout value from
  hardware capabilities, look for the nearest LRO timeout value.

  Fixes: 5c50368f3831 ('net/mlx5e: Light-weight netdev open/stop')
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
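  A runnable sketch of the nearest-value search; the capability array
  contents here are made up:

```c
#include <stdio.h>
#include <stdlib.h>

static int nearest_lro_timeout(const int *supported, int n, int wanted)
{
	int best = supported[0];

	/* keep the supported value with minimal distance to `wanted` */
	for (int i = 1; i < n; i++)
		if (abs(supported[i] - wanted) < abs(best - wanted))
			best = supported[i];
	return best;
}

int main(void)
{
	int caps[] = { 8, 16, 32, 1024 }; /* hypothetical usec values */

	printf("%d\n", nearest_lro_timeout(caps, 4, 600)); /* -> 1024 */
	return 0;
}
```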
* net/mlx5: E-Switch, Support VLAN actions in the offloads mode
  Or Gerlitz, 2016-09-23 (1 file changed, -0/+1)

  Many virtualization systems use a policy under which a vlan tag is
  pushed to packets sent by guests, and popped before the packet is
  forwarded to the VM.

  The current generation of the mlx5 HW doesn't fully support that on a
  per-flow level. As such, we address the above common use case with the
  SRIOV e-Switch's ability to push a vlan into packets sent by VFs and
  pop the vlan from packets forwarded to VFs.

  The HW can match on the correct vlan being present in packets
  forwarded to VFs (eSwitch steering is done before stripping the tag),
  so this part is offloaded as is.

  A common practice for vlans is to avoid both push vlan and pop vlan
  for inter-host VM/VM (east-west) communication, because in this case
  push on egress cancels out with pop on ingress. To support that, we
  use a global eswitch vlan pop policy, allowing guest A to communicate
  with both remote VM B and local VM C. This works since the HW pops
  the vlan only if it exists (e.g. for C --> A packets but not for
  B --> A packets).

  On the slow path, when a VF vport has an offloaded flow which involves
  pushing vlans, whereas another flow is not currently offloaded, the
  packets from the 2nd flow seen by the VF representor on the host have
  a vlan. The VF rep driver removes such a vlan before calling into the
  host networking stack.

  Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5e: XDP TX xmit more
  Saeed Mahameed, 2016-09-22 (1 file changed, -0/+1)

  Previously we rang the XDP SQ doorbell on every forwarded XDP packet.
  Here we introduce an xmit-more-like mechanism that queues up more than
  one packet into the SQ (up to the RX napi budget) without notifying
  the hardware. Once the RX napi budget is consumed and we exit the
  napi RX loop, we flush (doorbell) all XDP looped packets, in case
  there are such.

  XDP forward packet rate, comparing XDP with and without xmit more
  (bulk transmit):

  RX Cores    XDP TX      XDP TX (xmit more)
  ---------------------------------------------------
  1           6.5Mpps     12.4Mpps
  2           13.2Mpps    24.2Mpps
  4           25.2Mpps    36.3Mpps*
  8           36.3Mpps*   36.3Mpps*

  *My xmitter was limited to 36.3Mpps, so it is the bottleneck. It
  seems that the receive side can handle more.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
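  A sketch of the doorbell-batching pattern (names assumed): post WQEs
  during the napi loop, remember a flush is pending, ring once at the
  end:

```c
#include <stdbool.h>

struct xdp_sq {
	int  pc;         /* producer counter                     */
	bool db_pending; /* WQEs posted since the last doorbell? */
};

void xdp_sq_ring_doorbell(struct xdp_sq *sq) { (void)sq; /* MMIO write */ }

void xdp_sq_xmit(struct xdp_sq *sq)
{
	sq->pc++;              /* post the WQE ...                */
	sq->db_pending = true; /* ... but don't notify the HW yet */
}

/* called once when the RX napi budget is consumed and the loop exits */
void xdp_sq_flush(struct xdp_sq *sq)
{
	if (sq->db_pending) {
		xdp_sq_ring_doorbell(sq); /* one doorbell for the batch */
		sq->db_pending = false;
	}
}
```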
* net/mlx5e: XDP TX forwarding support
  Saeed Mahameed, 2016-09-22 (1 file changed, -4/+21)

  Add support for XDP_TX forwarding from an xdp program. Using XDP, a
  user can now loop packets out of the same port.

  We create a dedicated TX SQ for each channel that serves XDP programs
  returning the XDP_TX action, to loop packets back to the wire directly
  from the channel RQ RX path. For that, RX pages now need to be mapped
  bi-directionally, and on an XDP_TX action we sync the page back to the
  device and then queue it into the SQ for transmission.

  The XDP xmit frame function reports back to the RX path whether the
  page was consumed (transmitted); if so, the RX path forgets about that
  page as if it were released to the stack. Later, on XDP TX completion,
  the page is released back to the page cache.

  For simplicity, this patch hits a doorbell on every XDP TX packet. The
  next patch introduces an xmit-more-like mechanism that queues up more
  than one packet into the SQ without notifying the hardware; once the
  RX napi loop is done, we hit the doorbell once for all the XDP TX
  packets from the previous loop. This should drastically improve XDP TX
  performance.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5e: Have a clear separation between different SQ types
  Saeed Mahameed, 2016-09-22 (1 file changed, -5/+18)

  Make a clear separation between regular SQ (TXQ) and ICO SQ creation
  and destruction, and union their mutual information structures. Don't
  allocate redundant TXQ skb/wqe_info/dma_fifo arrays for the ICO SQ,
  and use a different SQ edge for the ICO SQ than for the TXQ SQ, to be
  more accurate.

  In preparation for XDP TX support.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5e: XDP fast RX drop bpf programs support
  Rana Shahout, 2016-09-22 (1 file changed, -0/+2)

  Add support for the BPF_PROG_TYPE_PHYS_DEV hook in the mlx5e driver.

  When XDP is on, we make sure to change the channels' RQ type to
  MLX5_WQ_TYPE_LINKED_LIST rather than the "striding RQ" type, to
  ensure "page per packet".

  On XDP set, we fail if HW LRO is set and request the user to turn it
  off. Since on ConnectX4-LX HW LRO is always on by default, this will
  be annoying, but we prefer not to enforce LRO off from the XDP set
  function.

  A full channels reset (close/open) is required only when setting XDP
  on/off. When XDP set is called just to exchange programs, we update
  each RQ xdp program on the fly, and for synchronization with the
  current RX data path activity of that RQ, we temporarily disable the
  RQ, ensure the RX path is not running, quickly update, and re-enable
  the RQ. For that we do:
      - rq.state = disabled
      - napi_synchronize
      - xchg(rq->xdp_prg)
      - rq.state = enabled
      - napi_schedule // Just in case we've missed an IRQ

  Packet rate performance testing was done with pktgen 64B packets on
  the TX side, and a TC drop action on the RX side compared to XDP fast
  drop.

  CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz

  Comparison is done between:
  1. Baseline, before this patch, with TC drop action
  2. This patch with TC drop action
  3. This patch with XDP RX fast drop

  RX Cores    Baseline (TC drop)    TC drop     XDP fast drop
  --------------------------------------------------------------
  1           5.3Mpps               5.3Mpps     16.5Mpps
  2           10.2Mpps              10.2Mpps    31.3Mpps
  4           20.5Mpps              19.9Mpps    36.3Mpps*

  *My xmitter was limited to 36.3Mpps, so it is the bottleneck. It
  seems that the receive side can handle more.

  Signed-off-by: Rana Shahout <ranas@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
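  The swap sequence above, modeled in portable C with atomics standing in
  for the kernel's set_bit/napi_synchronize/xchg primitives; a sketch,
  not the driver code:

```c
#include <stdatomic.h>
#include <stdbool.h>

struct bpf_prog;

struct rq {
	atomic_bool enabled;
	_Atomic(struct bpf_prog *) xdp_prog;
};

/* stand-ins for napi_synchronize()/napi_schedule() */
void napi_synchronize_stub(void) { /* wait until in-flight RX pass ends */ }
void napi_schedule_stub(void)    { /* kick RX in case an IRQ was missed */ }

void rq_swap_xdp_prog(struct rq *rq, struct bpf_prog *new_prog)
{
	atomic_store(&rq->enabled, false);        /* rq.state = disabled */
	napi_synchronize_stub();                  /* napi_synchronize    */
	atomic_exchange(&rq->xdp_prog, new_prog); /* xchg(rq->xdp_prg);
						     real code would also
						     release the old prog */
	atomic_store(&rq->enabled, true);         /* rq.state = enabled  */
	napi_schedule_stub();                     /* napi_schedule       */
}
```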
* net/mlx5e: Union RQ RX info per RQ type
  Saeed Mahameed, 2016-09-22 (1 file changed, -5/+9)

  We have two types of RX RQs, and they use two separate sets of info
  arrays and structures in the RX data path functions. Today those
  structures are mutually exclusive per RQ type, hence one kind is
  allocated on RQ creation according to the RQ type. For better cache
  locality and to minimize the sizeof(struct mlx5e_rq), in this patch
  we define them as a union.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
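  A sketch of the union layout (member types simplified):

```c
/* the two RQ flavors never coexist in one ring, so their per-WQE info
 * arrays can share storage, shrinking sizeof(struct rq) */
struct wqe_info_ll  { void *page; };                  /* linked-list RQ */
struct wqe_info_mpw { void *umr_pages; int filled; }; /* striding RQ    */

struct rq {
	int wq_type;
	union {
		struct wqe_info_ll  *ll;  /* valid for the linked-list type */
		struct wqe_info_mpw *mpw; /* valid for the striding type    */
	} info;
};
```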
* net/mlx5e: Build RX SKB on demand
  Saeed Mahameed, 2016-09-22 (1 file changed, -2/+8)

  For the non-striding RQ configuration, before this patch we had a ring
  with pre-allocated SKBs, and mapped the SKB->data buffers for the
  device. For robustness and better RX data buffer management, we now
  allocate a page per packet and build_skb around it.

  This patch (which is a prerequisite for XDP) would actually reduce
  performance for normal stack usage, because we now hit a bottleneck
  in the page allocator. We use the page-cache to restore, or even
  improve, performance in comparison to the old RX scheme.

  Packet rate performance testing was done with pktgen 64B packets on
  the xmit side and a TC ingress dropping action on the RX side.

  CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz

  Comparison is done between:
  1. Baseline, before 'net/mlx5e: Build RX SKB on demand'
  2. Build SKB with RX page cache (this patch)

  RX Cores    Baseline     Build SKB + page-cache    Improvement
  -----------------------------------------------------------
  1           4.16Mpps     5.33Mpps                  28%
  2           7.16Mpps     10.24Mpps                 43%
  4           13.61Mpps    20.51Mpps                 51%
  8           25.32Mpps    32.00Mpps                 26%

  All respective cores were 100% utilized.

  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5e: Implement RX mapped page cache for page recycle
  Tariq Toukan, 2016-09-17 (1 file changed, -0/+16)

  Instead of reallocating and mapping pages for the RX data-path,
  recycle already used pages in a per-ring cache.

  Performance tests:
  The following results were measured on a freshly booted system, giving
  optimal baseline performance, as high-order pages are yet to be
  fragmented and depleted. We ran pktgen single-stream benchmarks with
  iptables-raw-drop:

  Single stride, 64 bytes:
  * 4,739,057 - baseline
  * 4,749,550 - order0 no cache
  * 4,786,899 - order0 with cache
  1% gain

  Larger packets, no page cross, 1024 bytes:
  * 3,982,361 - baseline
  * 3,845,682 - order0 no cache
  * 4,127,852 - order0 with cache
  3.7% gain

  Larger packets, every 3rd packet crosses a page, 1500 bytes:
  * 3,731,189 - baseline
  * 3,579,414 - order0 no cache
  * 3,931,708 - order0 with cache
  5.4% gain

  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
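  A runnable sketch of a per-ring recycle cache: a small FIFO of
  previously used "pages" (plain allocations here) consulted before
  hitting the allocator. Sizes and names are illustrative:

```c
#include <stdlib.h>

#define CACHE_SZ 128

struct page_cache {
	int head, tail;       /* FIFO indices                */
	void *slot[CACHE_SZ]; /* stands in for struct page * */
};

void *page_get(struct page_cache *c)
{
	if (c->head != c->tail) {             /* cache hit: recycle */
		void *p = c->slot[c->tail];
		c->tail = (c->tail + 1) % CACHE_SZ;
		return p;
	}
	return malloc(4096);                  /* miss: fresh "page" */
}

void page_put(struct page_cache *c, void *p)
{
	int next = (c->head + 1) % CACHE_SZ;

	if (next == c->tail) {                /* cache full: release */
		free(p);
		return;
	}
	c->slot[c->head] = p;
	c->head = next;
}
```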
* net/mlx5e: Single flow order-0 pages for Striding RQ
  Tariq Toukan, 2016-09-17 (1 file changed, -39/+15)

  To improve the memory consumption scheme, we omit the flow that
  demands and splits high-order pages in Striding RQ, and stay with a
  single Striding RQ flow that uses order-0 pages. Moving to fragmented
  memory allows the use of larger MPWQEs, which reduces the number of
  UMR posts and filler CQEs.

  Moving to a single flow allows several optimizations that improve
  performance, especially in production servers where we would anyway
  fall back to order-0 allocations:
  - inline functions that were called via function pointers.
  - improve the UMR post process.

  This patch alone is expected to give a slight performance reduction.
  However, the new memory scheme gives the possibility to use a
  page-cache of a fair size that doesn't inflate the memory footprint,
  which will dramatically fix the reduction and even give a performance
  gain.

  Performance tests:
  The following results were measured on a freshly booted system, giving
  optimal baseline performance, as high-order pages are yet to be
  fragmented and depleted. We ran pktgen single-stream benchmarks with
  iptables-raw-drop:

  Single stride, 64 bytes:
  * 4,739,057 - baseline
  * 4,749,550 - this patch
  no reduction

  Larger packets, no page cross, 1024 bytes:
  * 3,982,361 - baseline
  * 3,845,682 - this patch
  3.5% reduction

  Larger packets, every 3rd packet crosses a page, 1500 bytes:
  * 3,731,189 - baseline
  * 3,579,414 - this patch
  4% reduction

  Fixes: 461017cb006a ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
  Fixes: bc77b240b3c5 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
  Signed-off-by: Tariq Toukan <tariqt@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5e: Implement mlx5e interface attach/detach callbacks
  Mohamad Haj Yahia, 2016-09-10 (1 file changed, -2/+5)

  Needed to support seamless and lightweight PCI/internal error
  recovery. Implement the attach/detach interface callbacks: in the
  attach callback we only allocate HW resources; in the detach callback
  we only deallocate HW resources. All SW/kernel object
  initializing/destroying is kept in the add/remove callbacks.

  Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
  David S. Miller, 2016-08-30 (1 file changed, -12/+9)

  All three conflicts were cases of simple overlapping changes.

  Signed-off-by: David S. Miller <davem@davemloft.net>
| * net/mlx5e: Don't wait for SQ completions on close
  Saeed Mahameed, 2016-08-28 (1 file changed, -2/+1)

  Instead of asking the firmware to flush the SQ (Send Queue) via
  asynchronous completions when it is moved to error, we handle the SQ
  flush manually (mlx5e_free_tx_descs), the same as we did when the SQ
  flush timed out or on tx_timeout. This will reduce the SQ flush time
  and speed up the interface-down procedure.

  mlx5e_free_tx_descs was moved to the end of en_tx.c for TX critical
  code locality.

  Fixes: 29429f3300a3 ('net/mlx5e: Timeout if SQ doesn't flush during close')
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
| * net/mlx5e: Don't wait for RQ completions on close
  Saeed Mahameed, 2016-08-28 (1 file changed, -3/+1)

  This will significantly reduce the receive queue flush time on
  interface down. Instead of asking the firmware to flush the RQ
  (Receive Queue) via asynchronous completions when it is moved to
  error, we handle the RQ flush manually (mlx5e_free_rx_descs), the
  same as we did when the RQ flush timed out. This will reduce the RQ
  flush time and speed up the interface-down procedure (ifconfig down)
  from 6 sec to 0.3 sec on a 48-core system.

  mlx5e_free_rx_descs was moved to en_main.c where it is needed, to
  keep en_rx.c free from non-critical data path code, for better code
  locality.

  Fixes: 6cd392a082de ('net/mlx5e: Handle RQ flush in error cases')
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
| * net/mlx5e: Limit UMR length to the device's limitation
  Saeed Mahameed, 2016-08-28 (1 file changed, -7/+7)

  The ConnectX-4 UMR (User Memory Region) MTT translation table offset
  in a WQE is limited to U16_MAX. Before this patch we ignored that
  limitation and requested the maximum possible UMR translation length
  that the netdev might need (MAX channels * MAX pages per channel). On
  a system with more than 32 cores, when a linear WQE allocation fails,
  falling back to using UMR WQEs would cause the RQ (Receive Queue) to
  get stuck.

  Here we limit the UMR length to min(U16_MAX, max required pages)
  (while considering the required alignments) on driver load. By
  default, U16_MAX is sufficient, since the default RX rings value
  guarantees that we are in range. Dynamically (on
  set_ringparam/set_channels) we check if the new required UMR length
  (num mtts) is still in range; if not, we fail the request.

  Fixes: bc77b240b3c5 ('net/mlx5e: Add fragmented memory support for RX multi packet WQE')
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
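  A sketch of the bound computation, assuming a power-of-two alignment
  on the MTT entry count (the real alignment constant is not given
  here):

```c
#include <stdint.h>

#define MTT_ALIGN 8 /* assumed alignment of MTT entries */

uint32_t umr_max_num_mtts(uint32_t max_channels, uint32_t pages_per_ch)
{
	uint64_t wanted = (uint64_t)max_channels * pages_per_ch;
	/* the WQE's MTT offset is a u16, so cap at U16_MAX */
	uint64_t capped = wanted < UINT16_MAX ? wanted : UINT16_MAX;

	/* round down to the required alignment */
	return (uint32_t)(capped & ~(uint64_t)(MTT_ALIGN - 1));
}
```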
* | net/mlx5: Expose mlx5e_link_mode
  Noa Osherovich, 2016-08-18 (1 file changed, -34/+0)

  The mlx5e_link_mode enumeration will also be used in mlx5_ib for RoCE.
  This patch moves the enumeration to the mlx5 driver port header file.

  Signed-off-by: Noa Osherovich <noaos@mellanox.com>
  Signed-off-by: Eran Ben Elisha <eranbe@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: Leon Romanovsky <leon@kernel.org>
* net/mlx5e: Query minimum required header copy during xmit
  Hadar Hen Zion, 2016-07-25 (1 file changed, -0/+7)

  Add support for querying the minimum inline mode from the firmware.
  It is required for correct TX steering according to L3/L4 packet
  headers.

  Each send queue (SQ) has an inline mode that defines the minimal
  required headers that need to be copied into the SQ WQE. The driver
  asks the firmware for the wqe_inline_mode device capability value. In
  case the device capability is defined as "vport context", the driver
  must check the reported min inline mode from the vport context before
  creating its SQs.

  Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5e: Check the minimum inline header mode before xmit
  Hadar Hen Zion, 2016-07-25 (1 file changed, -0/+1)

  Each send queue (SQ) has an inline mode that defines the minimal
  required inline headers in the SQ WQE. Before sending each packet,
  check that the minimum required headers on the WQE are copied.

  Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
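  A sketch of the per-mode check the two entries above describe, with
  assumed mode values and header lengths:

```c
#include <stdbool.h>

enum inline_mode { MODE_NONE, MODE_L2, MODE_IP, MODE_TCP_UDP };

/* minimal bytes that must be inlined per mode (illustrative values) */
static const unsigned int min_inline_len[] = {
	[MODE_NONE]    = 0,
	[MODE_L2]      = 14,           /* Ethernet header       */
	[MODE_IP]      = 14 + 20,      /* ... + IPv4 header     */
	[MODE_TCP_UDP] = 14 + 20 + 20, /* ... + TCP header      */
};

bool inline_copy_sufficient(enum inline_mode mode, unsigned int copied)
{
	return copied >= min_inline_len[mode];
}
```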
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
  David S. Miller, 2016-07-06 (1 file changed, -1/+10)

  Conflicts:
      drivers/net/ethernet/mellanox/mlx5/core/en.h
      drivers/net/ethernet/mellanox/mlx5/core/en_main.c
      drivers/net/usb/r8152.c

  All three conflicts were overlapping changes.

  Signed-off-by: David S. Miller <davem@davemloft.net>
| * net/mlx5e: Validate BW weight values of ETS
  Rana Shahout, 2016-07-01 (1 file changed, -1/+0)

  Valid weights assigned to ETS traffic classes are 1-100.

  Fixes: 08fb1dacdd76 ('net/mlx5e: Support DCBNL IEEE ETS')
  Signed-off-by: Rana Shahout <ranas@mellanox.com>
  Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
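  A runnable sketch of the 1-100 validation (constant name assumed):

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_BW_ALLOC 100 /* weights are percentages */

bool ets_weights_valid(const int *tc_tx_bw, int ntc)
{
	/* reject a config if any traffic class weight is out of range */
	for (int i = 0; i < ntc; i++)
		if (tc_tx_bw[i] < 1 || tc_tx_bw[i] > MAX_BW_ALLOC)
			return false;
	return true;
}

int main(void)
{
	int ok[] = { 50, 50 }, bad[] = { 0, 100 };

	printf("%d %d\n", ets_weights_valid(ok, 2),
	       ets_weights_valid(bad, 2)); /* prints "1 0" */
	return 0;
}
```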