Architecture

This document describes the Distributed Switch Architecture (DSA) subsystem design principles, limitations, interactions with other subsystems, and how to develop drivers for this subsystem as well as a TODO for developers interested in joining the effort.

Design principles

The Distributed Switch Architecture subsystem was primarily designed to support Marvell Ethernet switches (MV88E6xxx, a.k.a. Link Street product line) using Linux, but has since evolved to support other vendors as well.

The original philosophy behind this design was to be able to use unmodified Linux tools such as bridge, iproute2, ifconfig to work transparently whether they configured/queried a switch port network device or a regular network device.

An Ethernet switch typically comprises multiple front-panel ports and one or more CPU or management ports. The DSA subsystem currently relies on the presence of a management port connected to an Ethernet controller capable of receiving Ethernet frames from the switch. This is a very common setup for all kinds of Ethernet switches found in Small Home and Office products: routers, gateways, or even top-of-rack switches. This host Ethernet controller will be later referred to as “conduit” and “cpu” in DSA terminology and code.

The D in DSA stands for Distributed, because the subsystem has been designed with the ability to configure and manage cascaded switches on top of each other using upstream and downstream Ethernet links between switches. These specific ports are referred to as “dsa” ports in DSA terminology and code. A collection of multiple switches connected to each other is called a “switch tree”.

For each front-panel port, DSA creates specialized network devices which are used as controlling and data-flowing endpoints for use by the Linux networking stack. These specialized network interfaces are referred to as “user” network interfaces in DSA terminology and code.

The ideal case for using DSA is when an Ethernet switch supports a “switch tag” which is a hardware feature making the switch insert a specific tag for each Ethernet frame it receives to/from specific ports to help the management interface figure out:

  • what port is this frame coming from

  • what was the reason why this frame got forwarded

  • how to send CPU originated traffic to specific ports

The subsystem does support switches not capable of inserting/stripping tags, but the features might be slightly limited in that case (traffic separation relies on Port-based VLAN IDs).

Note that DSA does not currently create network interfaces for the “cpu” and “dsa” ports because:

  • the “cpu” port is the Ethernet switch facing side of the management controller, and as such, would create a duplication of features, since you would get two interfaces for the same conduit: conduit netdev, and “cpu” netdev

  • the “dsa” port(s) are just conduits between two or more switches, and as such cannot really be used as proper network interfaces either, only the downstream, or the top-most upstream interface makes sense with that model

NB: for the past 15 years, the DSA subsystem had been making use of the terms “master” (rather than “conduit”) and “slave” (rather than “user”). These terms have been removed from the DSA codebase and phased out of the uAPI.

Switch tagging protocols

DSA supports many vendor-specific tagging protocols, one software-defined tagging protocol, and a tag-less mode as well (DSA_TAG_PROTO_NONE).

The exact format of the tag protocol is vendor specific, but in general, they all contain something which:

  • identifies which port the Ethernet frame came from/should be sent to

  • provides a reason why this frame was forwarded to the management interface

All tagging protocols are in net/dsa/tag_*.c files and implement the methods of the struct dsa_device_ops structure, which are detailed below.

Tagging protocols generally fall in one of three categories:

  1. The switch-specific frame header is located before the Ethernet header, shifting to the right (from the perspective of the DSA conduit’s frame parser) the MAC DA, MAC SA, EtherType and the entire L2 payload.

  2. The switch-specific frame header is located before the EtherType, keeping the MAC DA and MAC SA in place from the DSA conduit’s perspective, but shifting the ‘real’ EtherType and L2 payload to the right.

  3. The switch-specific frame header is located at the tail of the packet, keeping all frame headers in place and not altering the view of the packet that the DSA conduit’s frame parser has.

A tagging protocol may tag all packets with switch tags of the same length, or the tag length might vary (for example packets with PTP timestamps might require an extended switch tag, or there might be one tag length on TX and a different one on RX). Either way, the tagging protocol driver must populate the struct dsa_device_ops::needed_headroom and/or struct dsa_device_ops::needed_tailroom with the length in octets of the longest switch frame header/trailer. The DSA framework will automatically adjust the MTU of the conduit interface to accommodate for this extra size in order for DSA user ports to support the standard MTU (L2 payload length) of 1500 octets. The needed_headroom and needed_tailroom properties are also used to request from the network stack, on a best-effort basis, the allocation of packets with enough extra space such that the act of pushing the switch tag on transmission of a packet does not cause it to reallocate due to lack of memory.

Even though applications are not expected to parse DSA-specific frame headers, the format on the wire of the tagging protocol represents an Application Binary Interface exposed by the kernel towards user space, for decoders such as libpcap. The tagging protocol driver must populate the proto member of struct dsa_device_ops with a value that uniquely describes the characteristics of the interaction required between the switch hardware and the data path driver: the offset of each bit field within the frame header and any stateful processing required to deal with the frames (as may be required for PTP timestamping).
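
As an illustration, a tagging protocol driver ties these pieces together by filling out a struct dsa_device_ops and registering it. The following is a hedged sketch for a hypothetical 4-octet “example” tag inserted before the EtherType (category 2): the DSA_TAG_PROTO_EXAMPLE value and all example_* names are made up, while the structure members and registration macros reflect include/net/dsa.h in recent kernels. The xmit and rcv implementations are sketched further below; a real tagger lives in net/dsa/tag_*.c and includes the headers it needs from there.

    #define EXAMPLE_TAG_LEN         4

    static struct sk_buff *example_tag_xmit(struct sk_buff *skb,
                                            struct net_device *dev);
    static struct sk_buff *example_tag_rcv(struct sk_buff *skb,
                                           struct net_device *dev);

    static const struct dsa_device_ops example_netdev_ops = {
            .name            = "example",
            .proto           = DSA_TAG_PROTO_EXAMPLE,
            .xmit            = example_tag_xmit,
            .rcv             = example_tag_rcv,
            .needed_headroom = EXAMPLE_TAG_LEN,
    };

    MODULE_DESCRIPTION("DSA tag driver for the hypothetical example switch");
    MODULE_LICENSE("GPL");
    MODULE_ALIAS_DSA_TAG_DRIVER(DSA_TAG_PROTO_EXAMPLE);

    module_dsa_tag_driver(example_netdev_ops);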

From the perspective of the network stack, all switches within the same DSA switch tree use the same tagging protocol. In case of a packet transiting a fabric with more than one switch, the switch-specific frame header is inserted by the first switch in the fabric that the packet was received on. This header typically contains information regarding its type (whether it is a control frame that must be trapped to the CPU, or a data frame to be forwarded). Control frames should be decapsulated only by the software data path, whereas data frames might also be autonomously forwarded towards other user ports of other switches from the same fabric, and in this case, the outermost switch ports must decapsulate the packet.

Note that in certain cases, it might be the case that the tagging format used by a leaf switch (not connected directly to the CPU) is not the same as what the network stack sees. This can be seen with Marvell switch trees, where the CPU port can be configured to use either the DSA or the Ethertype DSA (EDSA) format, but the DSA links are configured to use the shorter (without Ethertype) DSA frame header, in order to reduce the autonomous packet forwarding overhead. It still remains the case that, if the DSA switch tree is configured for the EDSA tagging protocol, the operating system sees EDSA-tagged packets from the leaf switches that tagged them with the shorter DSA header. This can be done because the Marvell switch connected directly to the CPU is configured to perform tag translation between DSA and EDSA (which is simply the operation of adding or removing the ETH_P_EDSA EtherType and some padding octets).

It is possible to construct cascaded setups of DSA switches even if their tagging protocols are not compatible with one another. In this case, there are no DSA links in this fabric, and each switch constitutes a disjoint DSA switch tree. The DSA links are viewed as simply a pair of a DSA conduit (the out-facing port of the upstream DSA switch) and a CPU port (the in-facing port of the downstream DSA switch).

The tagging protocol of the attached DSA switch tree can be viewed through the dsa/tagging sysfs attribute of the DSA conduit:

cat /sys/class/net/eth0/dsa/tagging

If the hardware and driver are capable, the tagging protocol of the DSA switch tree can be changed at runtime. This is done by writing the new tagging protocol name to the same sysfs device attribute as above (the DSA conduit and all attached switch ports must be down while doing this).

It is desirable that all tagging protocols are testable with the dsa_loop mockup driver, which can be attached to any network interface. The goal is that any network interface should be capable of transmitting the same packet in the same way, and the tagger should decode the same received packet in the same way regardless of the driver used for the switch control path, and the driver used for the DSA conduit.

The transmission of a packet goes through the tagger’s xmit function. The passed struct sk_buff *skb has skb->data pointing at skb_mac_header(skb), i.e. at the destination MAC address, and the passed struct net_device *dev represents the virtual DSA user network interface whose hardware counterpart the packet must be steered to (i.e. swp0). The job of this method is to prepare the skb in a way that the switch will understand what egress port the packet is for (and not deliver it towards other ports). Typically this is fulfilled by pushing a frame header. Checking for insufficient size in the skb headroom or tailroom is unnecessary provided that the needed_headroom and needed_tailroom properties were filled out properly, because DSA ensures there is enough space before calling this method.
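
Continuing the hypothetical “example” tagger from above, a minimal xmit sketch could look like this. The tag layout is made up; dsa_user_to_port() is assumed to be the tagger-internal helper which maps the user netdev to its struct dsa_port (it was called dsa_slave_to_port() in older kernels).

    static struct sk_buff *example_tag_xmit(struct sk_buff *skb,
                                            struct net_device *dev)
    {
            struct dsa_port *dp = dsa_user_to_port(dev);
            u8 *tag;

            /* DSA already guaranteed EXAMPLE_TAG_LEN octets of headroom,
             * because needed_headroom was filled out; no check needed.
             */
            skb_push(skb, EXAMPLE_TAG_LEN);

            /* Move MAC DA + MAC SA to the front, opening a gap right
             * before the original EtherType (category 2 insertion).
             */
            memmove(skb->data, skb->data + EXAMPLE_TAG_LEN, 2 * ETH_ALEN);

            tag = skb->data + 2 * ETH_ALEN;
            tag[0] = dp->index;     /* egress port on the switch */
            tag[1] = 0;             /* no special forwarding reason */
            tag[2] = 0;
            tag[3] = 0;

            return skb;
    }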

The reception of a packet goes through the tagger’s rcv function. The passed struct sk_buff *skb has skb->data pointing at skb_mac_header(skb) + ETH_HLEN octets, i.e. to where the first octet after the EtherType would have been, were this frame not tagged. The role of this method is to consume the frame header, adjust skb->data to really point at the first octet after the EtherType, and to change skb->dev to point to the virtual DSA user network interface corresponding to the physical front-facing switch port that the packet was received on.
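
A matching rcv sketch for the hypothetical “example” tagger follows. The source port lookup relies on dsa_conduit_find_user(), assumed here to be the tagger-internal helper that resolves a (conduit, switch index, port index) tuple to the user netdev (it was called dsa_master_find_slave() in older kernels).

    static struct sk_buff *example_tag_rcv(struct sk_buff *skb,
                                           struct net_device *dev)
    {
            int source_port;
            u8 *tag;

            if (unlikely(!pskb_may_pull(skb, EXAMPLE_TAG_LEN)))
                    return NULL;

            /* skb->data is ETH_HLEN octets past the MAC header, i.e. two
             * octets into the made-up tag for this frame format.
             */
            tag = skb->data - 2;
            source_port = tag[0];

            skb->dev = dsa_conduit_find_user(dev, 0, source_port);
            if (!skb->dev)
                    return NULL;

            /* Consume the tag and close the gap by moving MAC DA + MAC SA
             * to the right, restoring an untagged-looking frame.
             */
            skb_pull_rcsum(skb, EXAMPLE_TAG_LEN);
            memmove(skb->data - ETH_HLEN,
                    skb->data - ETH_HLEN - EXAMPLE_TAG_LEN, 2 * ETH_ALEN);

            return skb;
    }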

Since tagging protocols in category 1 and 2 break software (and most often also hardware) packet dissection on the DSA conduit, features such as RPS (Receive Packet Steering) on the DSA conduit would be broken. The DSA framework deals with this by hooking into the flow dissector and shifting the offset at which the IP header is to be found in the tagged frame as seen by the DSA conduit. This behavior is automatic based on the overhead value of the tagging protocol. If not all packets are of equal size, the tagger can implement the flow_dissect method of the struct dsa_device_ops and override this default behavior by specifying the correct offset incurred by each individual RX packet. Tail taggers do not cause issues to the flow dissector.

Checksum offload should work with category 1 and 2 taggers when the DSA conduit driver declares NETIF_F_HW_CSUM in vlan_features and looks at csum_start and csum_offset. For those cases, DSA will shift the checksum start and offset by the tag size. If the DSA conduit driver still uses the legacy NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM in vlan_features, the offload might only work if the offload hardware already expects that specific tag (perhaps due to matching vendors). DSA user ports inherit those flags from the conduit, and it is up to the driver to correctly fall back to software checksum when the IP header is not where the hardware expects. If that check is ineffective, the packets might go to the network without a proper checksum (the checksum field will have the pseudo IP header sum). For category 3, when the offload hardware does not already expect the switch tag in use, the checksum must be calculated before any tag is inserted (i.e. inside the tagger). Otherwise, the DSA conduit would include the tail tag in the (software or hardware) checksum calculation. Then, when the tag gets stripped by the switch during transmission, it will leave an incorrect IP checksum in place.

Due to various reasons (most common being category 1 taggers being associated with DSA-unaware conduits, mangling what the conduit perceives as MAC DA), the tagging protocol may require the DSA conduit to operate in promiscuous mode, to receive all frames regardless of the value of the MAC DA. This can be done by setting the promisc_on_conduit property of the struct dsa_device_ops. Note that this assumes a DSA-unaware conduit driver, which is the norm.

Conduit network devices

Conduit network devices are regular, unmodified Linux network device drivers for the CPU/management Ethernet interface. Such a driver might occasionally need to know whether DSA is enabled (e.g.: to enable/disable specific offload features), but the DSA subsystem has been proven to work with industry standard drivers: e1000e, mv643xx_eth etc. without having to introduce modifications to these drivers. Such network devices are also often referred to as conduit network devices since they act as a pipe between the host processor and the hardware Ethernet switch.

Networking stack hooks

When a conduit netdev is used with DSA, a small hook is placed in the networking stack in order to have the DSA subsystem process the Ethernet switch specific tagging protocol. DSA accomplishes this by registering a specific (and fake) Ethernet type (later becoming skb->protocol) with the networking stack; this is also known as a ptype or packet_type. A typical Ethernet Frame receive sequence looks like this:

Conduit network device (e.g.: e1000e):

  1. Receive interrupt fires:

    • receive function is invoked

    • basic packet processing is done: getting length, status etc.

    • packet is prepared to be processed by the Ethernet layer by calling eth_type_trans

  2. net/ethernet/eth.c:

    eth_type_trans(skb, dev)
            if (dev->dsa_ptr != NULL)
                    -> skb->protocol = ETH_P_XDSA
  3. drivers/net/ethernet/*:

    netif_receive_skb(skb)
            -> iterate over registered packet_type
                    -> invoke handler for ETH_P_XDSA, calls dsa_switch_rcv()
  4. net/dsa/dsa.c:

    -> dsa_switch_rcv()
            -> invoke switch tag specific protocol handler in 'net/dsa/tag_*.c'
  5. net/dsa/tag_*.c:

    • inspect and strip switch tag protocol to determine originating port

    • locate per-port network device

    • invoke eth_type_trans() with the DSA user network device

    • invoke netif_receive_skb()

Past this point, the DSA user network devices get delivered regular Ethernet frames that can be processed by the networking stack.

User network devices

User network devices created by DSA are stacked on top of their conduit network device, each of these network interfaces will be responsible for being a controlling and data-flowing end-point for each front-panel port of the switch. These interfaces are specialized in order to:

  • insert/remove the switch tag protocol (if it exists) when sending traffic to/from specific switch ports

  • query the switch for ethtool operations: statistics, link state, Wake-on-LAN, register dumps...

  • manage external/internal PHY: link, auto-negotiation, etc.

These user network devices have custom net_device_ops and ethtool_ops function pointers which allow DSA to introduce a level of layering between the networking stack/ethtool and the switch driver implementation.

Upon frame transmission from these user network devices, DSA will look up which switch tagging protocol is currently registered with these network devices and invoke a specific transmit routine which takes care of adding the relevant switch tag in the Ethernet frames.

These frames are then queued for transmission using the conduit network device ndo_start_xmit() function. Since they contain the appropriate switch tag, the Ethernet switch will be able to process these incoming frames from the management interface and deliver them to the physical switch port.

When using multiple CPU ports, it is possible to stack a LAG (bonding/team) device between the DSA user devices and the physical DSA conduits. The LAG device is thus also a DSA conduit, but the LAG slave devices continue to be DSA conduits as well (just with no user port assigned to them; this is needed for recovery in case the LAG DSA conduit disappears). Thus, the data path of the LAG DSA conduit is used asymmetrically. On RX, the ETH_P_XDSA handler, which calls dsa_switch_rcv(), is invoked early (on the physical DSA conduit; LAG slave). Therefore, the RX data path of the LAG DSA conduit is not used. On the other hand, TX takes place linearly: dsa_user_xmit calls dsa_enqueue_skb, which calls dev_queue_xmit towards the LAG DSA conduit. The latter calls dev_queue_xmit towards one physical DSA conduit or the other, and in both cases, the packet exits the system through a hardware path towards the switch.

Graphical representation

Summarized, this is basically how DSA looks from a network device perspective:

             Unaware application
           opens and binds socket
                    |  ^
                    |  |
        +-----------v--|--------------------+
        |+------+ +------+ +------+ +------+|
        || swp0 | | swp1 | | swp2 | | swp3 ||
        |+------+-+------+-+------+-+------+|
        |          DSA switch driver        |
        +-----------------------------------+
                      |        ^
         Tag added by |        | Tag consumed by
        switch driver |        | switch driver
                      v        |
        +-----------------------------------+
        | Unmodified host interface driver  | Software
--------+-----------------------------------+------------
        |       Host interface (eth0)       | Hardware
        +-----------------------------------+
                      |        ^
      Tag consumed by |        | Tag added by
      switch hardware |        | switch hardware
                      v        |
        +-----------------------------------+
        |               Switch              |
        |+------+ +------+ +------+ +------+|
        || swp0 | | swp1 | | swp2 | | swp3 ||
        ++------+-+------+-+------+-+------++

User MDIO bus

In order to be able to read to/from a switch PHY built into it, DSA creates a user MDIO bus which allows a specific switch driver to divert and intercept MDIO reads/writes towards specific PHY addresses. In most MDIO-connected switches, these functions would utilize direct or indirect PHY addressing mode to return standard MII registers from the switch builtin PHYs, allowing the PHY library to return link status, link partner pages, auto-negotiation results, etc.

For Ethernet switches which have both external and internal MDIO buses, the user MII bus can be utilized to mux/demux MDIO reads and writes towards either internal or external MDIO devices this switch might be connected to: internal PHYs, external PHYs, or even external switches.

Data structures

DSA data structures are defined in include/net/dsa.h as well as net/dsa/dsa_priv.h:

  • dsa_chip_data: platform data configuration for a given switch device, this structure describes a switch device’s parent device, its address, as well as various properties of its ports: names/labels, and finally a routing table indication (when cascading switches)

  • dsa_platform_data: platform device configuration data which can reference a collection of dsa_chip_data structures if multiple switches are cascaded, the conduit network device this switch tree is attached to needs to be referenced

  • dsa_switch_tree: structure assigned to the conduit network device under dsa_ptr, this structure references a dsa_platform_data structure as well as the tagging protocol supported by the switch tree, and which receive/transmit function hooks should be invoked, information about the directly attached switch is also provided: CPU port. Finally, a collection of dsa_switch are referenced to address individual switches in the tree.

  • dsa_switch: structure describing a switch device in the tree, referencing a dsa_switch_tree as a backpointer, user network devices, conduit network device, and a reference to the backing dsa_switch_ops

  • dsa_switch_ops: structure referencing function pointers, see below for a full description.

Design limitations

Lack of CPU/DSA network devices

DSA does not currently create user network devices for the CPU or DSA ports, as described before. This might be an issue in the following cases:

  • inability to fetch switch CPU port statistics counters using ethtool, which can make it harder to debug MDIO switches connected using xMII interfaces

  • inability to configure the CPU port link parameters based on the Ethernet controller capabilities attached to it: http://patchwork.ozlabs.org/patch/509806/

  • inability to configure specific VLAN IDs / trunking VLANs between switches when using a cascaded setup

Common pitfalls using DSA setups

Once a conduit network device is configured to use DSA (dev->dsa_ptr becomes non-NULL), and the switch behind it expects a tagging protocol, this network interface can only exclusively be used as a conduit interface. Sending packets directly through this interface (e.g.: opening a socket using this interface) will not make us go through the switch tagging protocol transmit function, so the Ethernet switch on the other end, expecting a tag, will typically drop this frame.

Interactions with other subsystems

DSA currently leverages the following subsystems:

  • MDIO/PHY library: drivers/net/phy/phy.c, mdio_bus.c

  • Switchdev: net/switchdev/*

  • Device Tree for various of_* functions

  • Devlink: net/core/devlink.c

MDIO/PHY library

User network devices exposed by DSA may or may not be interfacing with PHY devices (struct phy_device as defined in include/linux/phy.h), but the DSA subsystem deals with all possible combinations:

  • internal PHY devices, built into the Ethernet switch hardware

  • external PHY devices, connected via an internal or external MDIO bus

  • internal PHY devices, connected via an internal MDIO bus

  • special, non-autonegotiated or non MDIO-managed PHY devices: SFPs, MoCA; a.k.a. fixed PHYs

The PHY configuration is done by the dsa_user_phy_setup() function and the logic basically looks like this:

  • if Device Tree is used, the PHY device is looked up using the standard “phy-handle” property, if found, this PHY device is created and registered using of_phy_connect()

  • if Device Tree is used and the PHY device is “fixed”, that is, conforms to the definition of a non-MDIO managed PHY as defined in Documentation/devicetree/bindings/net/fixed-link.txt, the PHY is registered and connected transparently using the special fixed MDIO bus driver

  • finally, if the PHY is built into the switch, as is very common with standalone switch packages, the PHY is probed using the user MII bus created by DSA

SWITCHDEV

DSA directly utilizes SWITCHDEV when interfacing with the bridge layer, and more specifically with its VLAN filtering portion when configuring VLANs on top of per-port user network devices. As of today, the only SWITCHDEV objects supported by DSA are the FDB and VLAN objects.

Devlink

DSA registers one devlink device per physical switch in the fabric. For each devlink device, every physical port (i.e. user ports, CPU ports, DSA links or unused ports) is exposed as a devlink port.

DSA drivers can make use of the following devlink features:

  • Regions: debugging feature which allows user space to dump driver-defined areas of hardware information in a low-level, binary format. Both global regions as well as per-port regions are supported. It is possible to export devlink regions even for pieces of data that are already exposed in some way to the standard iproute2 user space programs (ip-link, bridge), like address tables and VLAN tables. For example, this might be useful if the tables contain additional hardware-specific details which are not visible through the iproute2 abstraction, or it might be useful to inspect these tables on the non-user ports too, which are invisible to iproute2 because no network interface is registered for them.

  • Params: a feature which enables users to configure certain low-level tunable knobs pertaining to the device. Drivers may implement applicable generic devlink params, or may add new device-specific devlink params.

  • Resources: a monitoring feature which enables users to see the degree of utilization of certain hardware tables in the device, such as FDB, VLAN, etc.

  • Shared buffers: a QoS feature for adjusting and partitioning memory and frame reservations per port and per traffic class, in the ingress and egress directions, such that low-priority bulk traffic does not impede the processing of high-priority critical traffic.

For more details, consult Documentation/networking/devlink/.

Device Tree

DSA features a standardized binding which is documented in Documentation/devicetree/bindings/net/dsa/dsa.txt. PHY/MDIO library helper functions such as of_get_phy_mode(), of_phy_connect() are also used to query per-port PHY specific details: interface connection, MDIO bus location, etc.

Driver development

DSA switch drivers need to implement a dsa_switch_ops structure which will contain the various members described below.

Probing, registration and device lifetime

DSA switches are regular device structures on buses (be they platform, SPI, I2C, MDIO or otherwise). The DSA framework is not involved in their probing with the device core.

Switch registration from the perspective of a driver means passing a valid struct dsa_switch pointer to dsa_register_switch(), usually from the switch driver’s probing function. The following members must be valid in the provided structure (a probe sketch follows this list):

  • ds->dev: will be used to parse the switch’s OF node or platform data.

  • ds->num_ports: will be used to create the port list for this switch, and to validate the port indices provided in the OF node.

  • ds->ops: a pointer to the dsa_switch_ops structure holding the DSA method implementations.

  • ds->priv: backpointer to a driver-private data structure which can be retrieved in all further DSA method callbacks.
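
As a minimal, hedged illustration, registration from an MDIO-attached switch driver’s probe function could look like the sketch below. All foo_* identifiers are made up for the example; only devm_kzalloc(), dev_set_drvdata() and dsa_register_switch() are real kernel APIs, and foo_switch_ops refers to a struct dsa_switch_ops such as the one sketched later in this document.

    struct foo_priv {
            struct dsa_switch *ds;
            struct mdio_device *mdiodev;
    };

    static int foo_probe(struct mdio_device *mdiodev)
    {
            struct device *dev = &mdiodev->dev;
            struct foo_priv *priv;
            struct dsa_switch *ds;

            priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
            ds = devm_kzalloc(dev, sizeof(*ds), GFP_KERNEL);
            if (!priv || !ds)
                    return -ENOMEM;

            priv->ds = ds;
            priv->mdiodev = mdiodev;

            ds->dev = dev;                  /* OF node / platform data source */
            ds->num_ports = 8;              /* validates port indices */
            ds->ops = &foo_switch_ops;      /* DSA method implementations */
            ds->priv = priv;                /* retrievable in later callbacks */

            dev_set_drvdata(dev, priv);

            return dsa_register_switch(ds);
    }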

In addition, the following flags in the dsa_switch structure may optionally be configured to obtain driver-specific behavior from the DSA core. Their behavior when set is documented through comments in include/net/dsa.h.

  • ds->vlan_filtering_is_global

  • ds->needs_standalone_vlan_filtering

  • ds->configure_vlan_while_not_filtering

  • ds->untag_bridge_pvid

  • ds->assisted_learning_on_cpu_port

  • ds->mtu_enforcement_ingress

  • ds->fdb_isolation

Internally, DSA keeps an array of switch trees (group of switches) global to the kernel, and attaches a dsa_switch structure to a tree on registration. The tree ID to which the switch is attached is determined by the first u32 number of the dsa,member property of the switch’s OF node (0 if missing). The switch ID within the tree is determined by the second u32 number of the same OF property (0 if missing). Registering multiple switches with the same switch ID and tree ID is illegal and will cause an error. Using platform data, a single switch and a single switch tree is permitted.

In case of a tree with multiple switches, probing takes place asymmetrically. The first N-1 callers of dsa_register_switch() only add their ports to the port list of the tree (dst->ports), each port having a backpointer to its associated switch (dp->ds). Then, these switches exit their dsa_register_switch() call early, because dsa_tree_setup_routing_table() has determined that the tree is not yet complete (not all ports referenced by DSA links are present in the tree’s port list). The tree becomes complete when the last switch calls dsa_register_switch(), and this triggers the effective continuation of initialization (including the call to ds->ops->setup()) for all switches within that tree, all as part of the calling context of the last switch’s probe function.

The opposite of registration takes place when calling dsa_unregister_switch(), which removes a switch’s ports from the port list of the tree. The entire tree is torn down when the first switch unregisters.

It is mandatory for DSA switch drivers to implement the shutdown() callback of their respective bus, and call dsa_switch_shutdown() from it (a minimal version of the full teardown performed by dsa_unregister_switch()). The reason is that DSA keeps a reference on the conduit net device, and if the driver for the conduit device decides to unbind on shutdown, DSA’s reference will block that operation from finalizing.

Either dsa_switch_shutdown() or dsa_unregister_switch() must be called, but not both, and the device driver model permits the bus’ remove() method to be called even if shutdown() was already called. Therefore, drivers are expected to implement a mutual exclusion method between remove() and shutdown() by setting their drvdata to NULL after any of these has run, and checking whether the drvdata is NULL before proceeding to take any action.
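
Continuing the hypothetical foo driver from the probe sketch above, the remove() and shutdown() methods of an MDIO-attached switch could exclude each other through the NULLed drvdata like this:

    static void foo_remove(struct mdio_device *mdiodev)
    {
            struct foo_priv *priv = dev_get_drvdata(&mdiodev->dev);

            if (!priv)      /* shutdown() already ran */
                    return;

            dsa_unregister_switch(priv->ds);

            dev_set_drvdata(&mdiodev->dev, NULL);
    }

    static void foo_shutdown(struct mdio_device *mdiodev)
    {
            struct foo_priv *priv = dev_get_drvdata(&mdiodev->dev);

            if (!priv)      /* remove() already ran */
                    return;

            dsa_switch_shutdown(priv->ds);

            dev_set_drvdata(&mdiodev->dev, NULL);
    }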

After dsa_switch_shutdown() or dsa_unregister_switch() was called, no further callbacks via the provided dsa_switch_ops may take place, and the driver may free the data structures associated with the dsa_switch.

Switch configuration

  • get_tag_protocol: this is to indicate what kind of tagging protocol is supported, should be a valid value from the dsa_tag_protocol enum. The returned information does not have to be static; the driver is passed the CPU port number, as well as the tagging protocol of a possibly stacked upstream switch, in case there are hardware limitations in terms of supported tag formats.

  • change_tag_protocol: when the default tagging protocol has compatibility problems with the conduit or other issues, the driver may support changing it at runtime, either through a device tree property or through sysfs. In that case, further calls to get_tag_protocol should report the protocol in current use.

  • setup: setup function for the switch, this function is responsible for setting up the dsa_switch_ops private structure with all it needs: register maps, interrupts, mutexes, locks, etc. This function is also expected to properly configure the switch to separate all network interfaces from each other, that is, they should be isolated by the switch hardware itself, typically by creating a Port-based VLAN ID for each port and allowing only the CPU port and the specific port to be in the forwarding vector. Ports that are unused by the platform should be disabled. Past this function, the switch is expected to be fully configured and ready to serve any kind of request. It is recommended to issue a software reset of the switch during this setup function in order to avoid relying on what a previous software agent such as a bootloader/firmware may have previously configured. The method responsible for undoing any applicable allocations or operations done here is teardown. (A sketch of setup and get_tag_protocol follows this list.)

  • port_setup and port_teardown: methods for initialization and destruction of per-port data structures. It is mandatory for some operations such as registering and unregistering devlink port regions to be done from these methods, otherwise they are optional. A port will be torn down only if it has been previously set up. It is possible for a port to be set up during probing only to be torn down immediately afterwards, for example in case its PHY cannot be found. In this case, probing of the DSA switch continues without that particular port.

  • port_change_conduit: method through which the affinity (association used for traffic termination purposes) between a user port and a CPU port can be changed. By default all user ports from a tree are assigned to the first available CPU port that makes sense for them (most of the times this means the user ports of a tree are all assigned to the same CPU port, except for H topologies as described in commit 2c0b03258b8b). The port argument represents the index of the user port, and the conduit argument represents the new DSA conduit net_device. The CPU port associated with the new conduit can be retrieved by looking at struct dsa_port *cpu_dp = conduit->dsa_ptr. Additionally, the conduit can also be a LAG device where all the slave devices are physical DSA conduits. LAG DSA conduits also have a valid conduit->dsa_ptr pointer, however this is not unique, but rather a duplicate of the first physical DSA conduit’s (LAG slave) dsa_ptr. In case of a LAG DSA conduit, a further call to port_lag_join will be emitted separately for the physical CPU ports associated with the physical DSA conduits, requesting them to create a hardware LAG associated with the LAG interface.
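
A hedged sketch of the two members discussed above, for the hypothetical foo driver, is shown below. DSA_TAG_PROTO_NONE and dsa_is_unused_port() come from include/net/dsa.h; foo_software_reset() and foo_port_isolate() are assumptions standing in for hardware-specific register accesses.

    static enum dsa_tag_protocol foo_get_tag_protocol(struct dsa_switch *ds,
                                                      int port,
                                                      enum dsa_tag_protocol mprot)
    {
            /* Tag-less switch: traffic separation relies on port-based VLANs */
            return DSA_TAG_PROTO_NONE;
    }

    static int foo_setup(struct dsa_switch *ds)
    {
            struct foo_priv *priv = ds->priv;
            int err, port;

            /* Do not rely on what a bootloader may have configured */
            err = foo_software_reset(priv);
            if (err)
                    return err;

            for (port = 0; port < ds->num_ports; port++) {
                    if (dsa_is_unused_port(ds, port))
                            continue;

                    /* Allow this port to talk to the CPU port only */
                    err = foo_port_isolate(priv, port);
                    if (err)
                            return err;
            }

            return 0;
    }

    static const struct dsa_switch_ops foo_switch_ops = {
            .get_tag_protocol       = foo_get_tag_protocol,
            .setup                  = foo_setup,
            /* bridge, VLAN, FDB and ethtool methods would be added here */
    };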

PHY devices and link management

  • get_phy_flags: Some switches are interfaced to various kinds of Ethernet PHYs, if the PHY library PHY driver needs to know about information it cannot obtain on its own (e.g.: coming from switch memory mapped registers), this function should return a 32-bit bitmask of “flags” that is private between the switch driver and the Ethernet PHY driver in drivers/net/phy/*.

  • phy_read: Function invoked by the DSA user MDIO bus when attempting to read the switch port MDIO registers. If unavailable, return 0xffff for each read. For builtin switch Ethernet PHYs, this function should allow reading the link status, auto-negotiation results, link partner pages, etc. (a sketch of phy_read and phy_write follows this list)

  • phy_write: Function invoked by the DSA user MDIO bus when attempting to write to the switch port MDIO registers. If unavailable return a negative error code.

  • adjust_link: Function invoked by the PHY library when a user network device is attached to a PHY device. This function is responsible for appropriately configuring the switch port link parameters: speed, duplex, pause based on what the phy_device is providing.

  • fixed_link_update: Function invoked by the PHY library, and specifically by the fixed PHY driver asking the switch driver for link parameters that could not be auto-negotiated, or obtained by reading the PHY registers through MDIO. This is particularly useful for specific kinds of hardware such as QSGMII, MoCA or other kinds of non-MDIO managed PHYs where out of band link information is obtained
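
For the hypothetical foo switch, whose internal PHYs are reachable through indirect registers, the user MDIO bus accessors could be sketched as follows; FOO_NUM_INTERNAL_PHYS and the foo_phy_indirect_*() helpers are assumptions.

    static int foo_phy_read(struct dsa_switch *ds, int port, int regnum)
    {
            struct foo_priv *priv = ds->priv;

            if (port >= FOO_NUM_INTERNAL_PHYS)
                    return 0xffff;  /* no PHY at this MDIO address */

            return foo_phy_indirect_read(priv, port, regnum);
    }

    static int foo_phy_write(struct dsa_switch *ds, int port, int regnum,
                             u16 val)
    {
            struct foo_priv *priv = ds->priv;

            if (port >= FOO_NUM_INTERNAL_PHYS)
                    return -EOPNOTSUPP;

            return foo_phy_indirect_write(priv, port, regnum, val);
    }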

Ethtool operations

  • get_strings: ethtool function used to query the driver’s strings, will typically return statistics strings, private flags strings, etc. (a statistics sketch follows this list)

  • get_ethtool_stats: ethtool function used to query per-port statistics and return their values. DSA overlays user network devices general statistics: RX/TX counters from the network device, with switch driver specific statistics per port

  • get_sset_count: ethtool function used to query the number of statistics items

  • get_wol: ethtool function used to obtain Wake-on-LAN settings per-port, this function may for certain implementations also query the conduit network device Wake-on-LAN settings if this interface needs to participate in Wake-on-LAN

  • set_wol: ethtool function used to configure Wake-on-LAN settings per-port, direct counterpart to get_wol with similar restrictions

  • set_eee: ethtool function which is used to configure a switch port EEE (Green Ethernet) settings, can optionally invoke the PHY library to enable EEE at the PHY level if relevant. This function should enable EEE at the switch port MAC controller and data-processing logic

  • get_eee: ethtool function which is used to query a switch port EEE settings, this function should return the EEE state of the switch port MAC controller and data-processing logic as well as query the PHY for its currently configured EEE settings

  • get_eeprom_len: ethtool function returning for a given switch the EEPROM length/size in bytes

  • get_eeprom: ethtool function returning for a given switch the EEPROM contents

  • set_eeprom: ethtool function writing specified data to a given switch EEPROM

  • get_regs_len: ethtool function returning the register length for a given switch

  • get_regs: ethtool function returning the Ethernet switch internal register contents. This function might require user-land code in ethtool to pretty-print register values and registers
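
A hedged statistics sketch for the hypothetical foo driver is shown below; FOO_NUM_STATS, foo_stat_names[] and foo_read_stat() are assumptions, while the method signatures follow struct dsa_switch_ops.

    static int foo_get_sset_count(struct dsa_switch *ds, int port, int sset)
    {
            return sset == ETH_SS_STATS ? FOO_NUM_STATS : 0;
    }

    static void foo_get_strings(struct dsa_switch *ds, int port,
                                u32 stringset, u8 *data)
    {
            int i;

            if (stringset != ETH_SS_STATS)
                    return;

            for (i = 0; i < FOO_NUM_STATS; i++)
                    strscpy(data + i * ETH_GSTRING_LEN, foo_stat_names[i],
                            ETH_GSTRING_LEN);
    }

    static void foo_get_ethtool_stats(struct dsa_switch *ds, int port,
                                      u64 *data)
    {
            struct foo_priv *priv = ds->priv;
            int i;

            for (i = 0; i < FOO_NUM_STATS; i++)
                    data[i] = foo_read_stat(priv, port, i);
    }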

Power management

  • suspend: function invoked by the DSA platform device when the system goes to suspend, should quiesce all Ethernet switch activities, but keep ports participating in Wake-on-LAN active as well as additional wake-up logic if supported

  • resume: function invoked by the DSA platform device when the system resumes, should resume all Ethernet switch activities and re-configure the switch to be in a fully active state

  • port_enable: function invoked by the DSA user network device ndo_open function when a port is administratively brought up, this function should fully enable a given switch port. DSA takes care of marking the port with BR_STATE_BLOCKING if the port is a bridge member, or BR_STATE_FORWARDING if it was not, and propagating these changes down to the hardware

  • port_disable: function invoked by the DSA user network device ndo_close function when a port is administratively brought down, this function should fully disable a given switch port. DSA takes care of marking the port with BR_STATE_DISABLED and propagating changes to the hardware if this port is disabled while being a bridge member

Address databases

Switching hardware is expected to have a table for FDB entries, however not all of them are active at the same time. An address database is the subset (partition) of FDB entries that is active (can be matched by address learning on RX, or FDB lookup on TX) depending on the state of the port. An address database may occasionally be called “FID” (Filtering ID) in this document, although the underlying implementation may choose whatever is available to the hardware.

For example, all ports that belong to a VLAN-unaware bridge (which is currently VLAN-unaware) are expected to learn source addresses in the database associated by the driver with that bridge (and not with other VLAN-unaware bridges). During forwarding and FDB lookup, a packet received on a VLAN-unaware bridge port should be able to find a VLAN-unaware FDB entry having the same MAC DA as the packet, which is present on another port member of the same bridge. At the same time, the FDB lookup process must be able to not find an FDB entry having the same MAC DA as the packet, if that entry points towards a port which is a member of a different VLAN-unaware bridge (and is therefore associated with a different address database).

Similarly, each VLAN of each offloaded VLAN-aware bridge should have an associated address database, which is shared by all ports which are members of that VLAN, but not shared by ports belonging to different bridges that are members of the same VID.

In this context, a VLAN-unaware database means that all packets are expected to match on it irrespective of VLAN ID (only MAC address lookup), whereas a VLAN-aware database means that packets are supposed to match based on the VLAN ID from the classified 802.1Q header (or the pvid if untagged).

At the bridge layer, VLAN-unaware FDB entries have the special VID value of 0, whereas VLAN-aware FDB entries have non-zero VID values. Note that a VLAN-unaware bridge may have VLAN-aware (non-zero VID) FDB entries, and a VLAN-aware bridge may have VLAN-unaware FDB entries. As in hardware, the software bridge keeps separate address databases, and offloads to hardware the FDB entries belonging to these databases, through switchdev, asynchronously relative to the moment when the databases become active or inactive.

When a user port operates in standalone mode, its driver should configure it to use a separate database called a port private database. This is different from the databases described above, and should impede operation as standalone port (packet in, packet out to the CPU port) as little as possible. For example, on ingress, it should not attempt to learn the MAC SA of ingress traffic, since learning is a bridging layer service and this is a standalone port, therefore it would consume useless space. With no address learning, the port private database should be empty in a naive implementation, and in this case, all received packets should be trivially flooded to the CPU port.

DSA (cascade) and CPU ports are also called “shared” ports because they service multiple address databases, and the database that a packet should be associated to is usually embedded in the DSA tag. This means that the CPU port may simultaneously transport packets coming from a standalone port (which were classified by hardware in one address database), and from a bridge port (which were classified to a different address database).

Switch drivers which satisfy certain criteria are able to optimize the naive configuration by removing the CPU port from the flooding domain of the switch, and just program the hardware with FDB entries pointing towards the CPU port for which it is known that software is interested in those MAC addresses. Packets which do not match a known FDB entry will not be delivered to the CPU, which will save CPU cycles required for creating an skb just to drop it.

DSA is able to perform host address filtering for the following kinds of addresses:

  • Primary unicast MAC addresses of ports (dev->dev_addr). These are associated with the port private database of the respective user port, and the driver is notified to install them through port_fdb_add towards the CPU port.

  • Secondary unicast and multicast MAC addresses of ports (addresses added through dev_uc_add() and dev_mc_add()). These are also associated with the port private database of the respective user port.

  • Local/permanent bridge FDB entries (BR_FDB_LOCAL). These are the MAC addresses of the bridge ports, for which packets must be terminated locally and not forwarded. They are associated with the address database for that bridge.

  • Static bridge FDB entries installed towards foreign (non-DSA) interfaces present in the same bridge as some DSA switch ports. These are also associated with the address database for that bridge.

  • Dynamically learned FDB entries on foreign interfaces present in the same bridge as some DSA switch ports, only if ds->assisted_learning_on_cpu_port is set to true by the driver. These are associated with the address database for that bridge.

For various operations detailed below, DSA provides a dsa_db structure which can be of the following types:

  • DSA_DB_PORT: the FDB (or MDB) entry to be installed or deleted belongs to the port private database of user port db->dp.

  • DSA_DB_BRIDGE: the entry belongs to one of the address databases of bridge db->bridge. Separation between the VLAN-unaware database and the per-VID databases of this bridge is expected to be done by the driver.

  • DSA_DB_LAG: the entry belongs to the address database of LAG db->lag. Note: DSA_DB_LAG is currently unused and may be removed in the future.

The drivers which act upon the dsa_db argument in port_fdb_add, port_mdb_add etc should declare ds->fdb_isolation as true.
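
A hedged sketch of how a driver might map the dsa_db argument to a hardware FID in port_fdb_add is given below; the foo_fid_*() and foo_fdb_write() helpers are assumptions, while the dsa_db layout follows include/net/dsa.h.

    static int foo_port_fdb_add(struct dsa_switch *ds, int port,
                                const unsigned char *addr, u16 vid,
                                struct dsa_db db)
    {
            struct foo_priv *priv = ds->priv;
            u16 fid;

            switch (db.type) {
            case DSA_DB_PORT:
                    /* Standalone traffic: port private database */
                    fid = foo_fid_for_standalone_port(priv, db.dp->index);
                    break;
            case DSA_DB_BRIDGE:
                    /* One database per bridge (and per VID if VLAN-aware) */
                    fid = foo_fid_for_bridge(priv, db.bridge.num, vid);
                    break;
            default:
                    return -EOPNOTSUPP;
            }

            return foo_fdb_write(priv, fid, addr, vid, BIT(port));
    }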

DSA associates each offloaded bridge and each offloaded LAG with a one-based ID (struct dsa_bridge::num, struct dsa_lag::id) for the purposes of refcounting addresses on shared ports. Drivers may piggyback on DSA’s numbering scheme (the ID is readable through db->bridge.num and db->lag.id) or may implement their own.

Only the drivers which declare support for FDB isolation are notified of FDB entries on the CPU port belonging to DSA_DB_PORT databases. For compatibility/legacy reasons, DSA_DB_BRIDGE addresses are notified to drivers even if they do not support FDB isolation. However, db->bridge.num and db->lag.id are always set to 0 in that case (to denote the lack of isolation, for refcounting purposes).

Note that it is not mandatory for a switch driver to implement physically separate address databases for each standalone user port. Since FDB entries in the port private databases will always point to the CPU port, there is no risk for incorrect forwarding decisions. In this case, all standalone ports may share the same database, but the reference counting of host-filtered addresses (not deleting the FDB entry for a port’s MAC address if it’s still in use by another port) becomes the responsibility of the driver, because DSA is unaware that the port databases are in fact shared. This can be achieved by calling dsa_fdb_present_in_other_db() and dsa_mdb_present_in_other_db(). The down side is that the RX filtering lists of each user port are in fact shared, which means that user port A may accept a packet with a MAC DA it shouldn’t have, only because that MAC address was in the RX filtering list of user port B. These packets will still be dropped in software, however.

Bridge layer

Offloading the bridge forwarding plane is optional and handled by the methods below. They may be absent, return -EOPNOTSUPP, or ds->max_num_bridges may be non-zero and exceeded, and in this case, joining a bridge port is still possible, but the packet forwarding will take place in software, and the ports under a software bridge must remain configured in the same way as for standalone operation, i.e. have all bridging service functions (address learning etc) disabled, and send all received packets to the CPU port only.

Concretely, a port starts offloading the forwarding plane of a bridge once it returns success to the port_bridge_join method, and stops doing so after port_bridge_leave has been called. Offloading the bridge means autonomously learning FDB entries in accordance with the software bridge port’s state, and autonomously forwarding (or flooding) received packets without CPU intervention. This is optional even when offloading a bridge port. Tagging protocol drivers are expected to call dsa_default_offload_fwd_mark(skb) for packets which have already been autonomously forwarded in the forwarding domain of the ingress switch port. DSA, through dsa_port_devlink_setup(), considers all switch ports part of the same tree ID to be part of the same bridge forwarding domain (capable of autonomous forwarding to each other).

Offloading the TX forwarding process of a bridge is a distinct concept from simply offloading its forwarding plane, and refers to the ability of certain driver and tag protocol combinations to transmit a single skb coming from the bridge device’s transmit function to potentially multiple egress ports (and thereby avoid its cloning in software).

Packets for which the bridge requests this behavior are called data plane packets and have skb->offload_fwd_mark set to true in the tag protocol driver’s xmit function. Data plane packets are subject to FDB lookup, hardware learning on the CPU port, and do not override the port STP state. Additionally, replication of data plane packets (multicast, flooding) is handled in hardware and the bridge driver will transmit a single skb for each packet that may or may not need replication.

When the TX forwarding offload is enabled, the tag protocol driver is responsible to inject packets into the data plane of the hardware towards the correct bridging domain (FID) that the port is a part of. The port may be VLAN-unaware, and in this case the FID must be equal to the FID used by the driver for its VLAN-unaware address database associated with that bridge. Alternatively, the bridge may be VLAN-aware, and in that case, it is guaranteed that the packet is also VLAN-tagged with the VLAN ID that the bridge processed this packet in. It is the responsibility of the hardware to untag the VID on the egress-untagged ports, or keep the tag on the egress-tagged ones.

  • port_bridge_join: bridge layer function invoked when a given switch port is added to a bridge, this function should do what’s necessary at the switch level to permit the joining port to be added to the relevant logical domain for it to ingress/egress traffic with other members of the bridge. By setting the tx_fwd_offload argument to true, the TX forwarding process of this bridge is also offloaded.

  • port_bridge_leave: bridge layer function invoked when a given switch port is removed from a bridge, this function should do what’s necessary at the switch level to deny the leaving port from ingress/egress traffic from the remaining bridge members.

  • port_stp_state_set: bridge layer function invoked when a given switch port STP state is computed by the bridge layer and should be propagated to switch hardware to forward/block/learn traffic (a sketch follows this list).

  • port_bridge_flags: bridge layer function invoked when a port must configure its settings for e.g. flooding of unknown traffic or source address learning. The switch driver is responsible for initial setup of the standalone ports with address learning disabled and egress flooding of all types of traffic, then the DSA core notifies of any change to the bridge port flags when the port joins and leaves a bridge. DSA does not currently manage the bridge port flags for the CPU port. The assumption is that address learning should be statically enabled (if supported by the hardware) on the CPU port, and flooding towards the CPU port should also be enabled, due to a lack of an explicit address filtering mechanism in the DSA core.

  • port_fast_age: bridge layer function invoked when flushing the dynamically learned FDB entries on the port is necessary. This is called when transitioning from an STP state where learning should take place to an STP state where it shouldn’t, or when leaving a bridge, or when address learning is turned off via port_bridge_flags.
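
A hedged port_stp_state_set sketch for the hypothetical foo driver follows; the BR_STATE_* values come from include/uapi/linux/if_bridge.h, and foo_port_set_learning()/foo_port_set_forwarding() are assumptions.

    static void foo_port_stp_state_set(struct dsa_switch *ds, int port,
                                       u8 state)
    {
            struct foo_priv *priv = ds->priv;
            bool learning = false, forwarding = false;

            switch (state) {
            case BR_STATE_DISABLED:
            case BR_STATE_BLOCKING:
            case BR_STATE_LISTENING:
                    break;
            case BR_STATE_LEARNING:
                    learning = true;
                    break;
            case BR_STATE_FORWARDING:
                    learning = true;
                    forwarding = true;
                    break;
            }

            foo_port_set_learning(priv, port, learning);
            foo_port_set_forwarding(priv, port, forwarding);
    }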

Bridge VLAN filtering

  • port_vlan_filtering: bridge layer function invoked when the bridge gets configured for turning on or off VLAN filtering. If nothing specific needs to be done at the hardware level, this callback does not need to be implemented. When VLAN filtering is turned on, the hardware must be programmed with rejecting 802.1Q frames which have VLAN IDs outside of the programmed allowed VLAN ID map/rules. If there is no PVID programmed into the switch port, untagged frames must be rejected as well. When turned off the switch must accept any 802.1Q frames irrespective of their VLAN ID, and untagged frames are allowed.

  • port_vlan_add: bridge layer function invoked when a VLAN is configured (tagged or untagged) for the given switch port. The CPU port becomes a member of a VLAN only if a foreign bridge port is also a member of it (and forwarding needs to take place in software), or the VLAN is installed to the VLAN group of the bridge device itself, for termination purposes (bridge vlan add dev br0 vid 100 self). VLANs on shared ports are reference counted and removed when there is no user left. Drivers do not need to manually install a VLAN on the CPU port. (A sketch of this method follows the list.)

  • port_vlan_del: bridge layer function invoked when a VLAN is removed from the given switch port

  • port_fdb_add: bridge layer function invoked when the bridge wants to install a Forwarding Database entry, the switch hardware should be programmed with the specified address in the specified VLAN ID in the forwarding database associated with this VLAN ID.

  • port_fdb_del: bridge layer function invoked when the bridge wants to remove a Forwarding Database entry, the switch hardware should be programmed to delete the specified MAC address from the specified VLAN ID if it was mapped into this port forwarding database

  • port_fdb_dump: bridge bypass function invoked by ndo_fdb_dump on the physical DSA port interfaces. Since DSA does not attempt to keep in sync its hardware FDB entries with the software bridge, this method is implemented as a means to view the entries visible on user ports in the hardware database. The entries reported by this function have the self flag in the output of the bridge fdb show command.

  • port_mdb_add: bridge layer function invoked when the bridge wants to install a multicast database entry. The switch hardware should be programmed with the specified address in the specified VLAN ID in the forwarding database associated with this VLAN ID.

  • port_mdb_del: bridge layer function invoked when the bridge wants to remove a multicast database entry, the switch hardware should be programmed to delete the specified MAC address from the specified VLAN ID if it was mapped into this port forwarding database.
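
A hedged port_vlan_add sketch for the hypothetical foo driver is shown below; BRIDGE_VLAN_INFO_UNTAGGED/BRIDGE_VLAN_INFO_PVID and the extack helpers are real kernel definitions, while foo_vlan_table_add() and foo_port_set_pvid() are assumptions.

    static int foo_port_vlan_add(struct dsa_switch *ds, int port,
                                 const struct switchdev_obj_port_vlan *vlan,
                                 struct netlink_ext_ack *extack)
    {
            struct foo_priv *priv = ds->priv;
            bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
            bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
            int err;

            err = foo_vlan_table_add(priv, vlan->vid, port, untagged);
            if (err) {
                    NL_SET_ERR_MSG_MOD(extack, "No free VLAN table entry");
                    return err;
            }

            if (pvid)
                    foo_port_set_pvid(priv, port, vlan->vid);

            return 0;
    }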

Link aggregation

Link aggregation is implemented in the Linux networking stack by the bonding and team drivers, which are modeled as virtual, stackable network interfaces. DSA is capable of offloading a link aggregation group (LAG) to hardware that supports the feature, and supports bridging between physical ports and LAGs, as well as between LAGs. A bonding/team interface which holds multiple physical ports constitutes a logical port, although DSA has no explicit concept of a logical port at the moment. Due to this, events where a LAG joins/leaves a bridge are treated as if all individual physical ports that are members of that LAG join/leave the bridge. Switchdev port attributes (VLAN filtering, STP state, etc) and objects (VLANs, MDB entries) offloaded to a LAG as bridge port are treated similarly: DSA offloads the same switchdev object / port attribute on all members of the LAG. Static bridge FDB entries on a LAG are not yet supported, since the DSA driver API does not have the concept of a logical port ID.

  • port_lag_join: function invoked when a given switch port is added to a LAG. The driver may return -EOPNOTSUPP, and in this case, DSA will fall back to a software implementation where all traffic from this port is sent to the CPU.

  • port_lag_leave: function invoked when a given switch port leaves a LAG and returns to operation as a standalone port.

  • port_lag_change: function invoked when the link state of any member of the LAG changes, and the hashing function needs rebalancing to only make use of the subset of physical LAG member ports that are up.

Drivers that benefit from having an ID associated with each offloaded LAG can optionally populate ds->num_lag_ids from the dsa_switch_ops::setup method. The LAG ID associated with a bonding/team interface can then be retrieved by a DSA switch driver using the dsa_lag_id function.

IEC 62439-2 (MRP)

The Media Redundancy Protocol is a topology management protocol optimized for fast fault recovery time for ring networks, which has some components implemented as a function of the bridge driver. MRP uses management PDUs (Test, Topology, LinkDown/Up, Option) sent at a multicast destination MAC address range of 01:15:4e:00:00:0x and with an EtherType of 0x88e3. Depending on the node’s role in the ring (MRM: Media Redundancy Manager, MRC: Media Redundancy Client, MRA: Media Redundancy Automanager), certain MRP PDUs might need to be terminated locally and others might need to be forwarded. An MRM might also benefit from offloading to hardware the creation and transmission of certain MRP PDUs (Test).

Normally an MRP instance can be created on top of any network interface, however in the case of a device with an offloaded data path such as DSA, it is necessary for the hardware, even if it is not MRP-aware, to be able to extract the MRP PDUs from the fabric before the driver can proceed with the software implementation. DSA today has no driver which is MRP-aware, therefore it only listens for the bare minimum switchdev objects required for the software assist to work properly. The operations are detailed below.

  • port_mrp_add and port_mrp_del: notifies driver when an MRP instance with a certain ring ID, priority, primary port and secondary port is created/deleted.

  • port_mrp_add_ring_role and port_mrp_del_ring_role: function invoked when an MRP instance changes ring roles between MRM or MRC. This affects which MRP PDUs should be trapped to software and which should be autonomously forwarded.

IEC 62439-3 (HSR/PRP)

The Parallel Redundancy Protocol (PRP) is a network redundancy protocol which works by duplicating and sequence numbering packets through two independent L2 networks (which are unaware of the PRP tail tags carried in the packets), and eliminating the duplicates at the receiver. The High-availability Seamless Redundancy (HSR) protocol is similar in concept, except all nodes that carry the redundant traffic are aware of the fact that it is HSR-tagged (because HSR uses a header with an EtherType of 0x892f) and are physically connected in a ring topology. Both HSR and PRP use supervision frames for monitoring the health of the network and for discovery of other nodes.

In Linux, both HSR and PRP are implemented in the hsr driver, which instantiates a virtual, stackable network interface with two member ports. The driver only implements the basic roles of DANH (Doubly Attached Node implementing HSR), DANP (Doubly Attached Node implementing PRP) and RedBox (allows non-HSR devices to connect to the ring via Interlink ports).

A driver which is capable of offloading certain functions should declare the corresponding netdev features as indicated by the documentation at Documentation/networking/netdev-features.rst. Additionally, the following methods must be implemented:

  • port_hsr_join: function invoked when a given switch port is added to a DANP/DANH. The driver may return -EOPNOTSUPP and in this case, DSA will fall back to a software implementation where all traffic from this port is sent to the CPU.

  • port_hsr_leave: function invoked when a given switch port leaves a DANP/DANH and returns to normal operation as a standalone port.

Note that the NETIF_F_HW_HSR_DUP feature relies on transmission towards multiple ports, which is generally available whenever the tagging protocol uses the dsa_xmit_port_mask() helper function. If the helper is used, the HSR offload feature should also be set. The dsa_port_simple_hsr_join() and dsa_port_simple_hsr_leave() methods can be used as generic implementations of port_hsr_join and port_hsr_leave, if this is the only supported offload feature.

TODO

Making SWITCHDEV and DSA converge towards a unified codebase

SWITCHDEV properly takes care of abstracting the networking stack with offload capable hardware, but does not enforce a strict switch device driver model. On the other hand, DSA enforces a fairly strict device driver model, and deals with most of the switch-specific details. At some point we should envision a merger between these two subsystems and get the best of both worlds.