| { | |
| "title": "State-Compute Replication: Parallelizing High-Speed Stateful Packet Processing", | |
| "abstract": "With the slowdown of Moore\u2019s law, CPU-oriented packet processing in\nsoftware will be significantly outpaced by emerging line speeds of\nnetwork interface cards (NICs). Single-core packet-processing\nthroughput has saturated.", | |
| "sections": [ | |
| { | |
| "section_id": "1", | |
| "parent_section_id": null, | |
| "section_name": "Introduction", | |
| "text": "Designing software to handle high packet-processing loads is crucial\nin networked systems. For example, software load balancers, CDN nodes,\nDDoS mitigators, and many other middleboxes depend on it. Yet, with\nthe slowdown of Moore\u2019s law, software packet processing has struggled\nto keep up with line speeds of network interface cards (NICs), with\nemerging speeds of 200 Gbit/s and\nbeyond [20 ###reference_b20###]. Consequently, there have been\nsignificant efforts to speed up packet processing through better\nnetwork stack design, removing user-kernel crossings, running software\nat lower layers of the stack (e.g. NIC device driver), and designing\nbetter host interconnects.\nWe consider the problem of scaling software packet processing by using\nmultiple cores on a server. The key challenge is that many\npacket-processing applications are stateful, maintaining and\nupdating regions of memory across many packets. If multiple cores\ncontend to access the same memory regions, there is significant memory\ncontention and cache bouncing, resulting in poor performance. Hence,\nthe classic approach to multicore scaling is to process packets\ntouching distinct states, i.e. flows, on different cores, hence\nremoving memory contention and synchronization. For example, a load\nbalancer that maintains a separate backend server for each 5-tuple may\nsend all packets of a given 5-tuple to a fixed core, but process\ndifferent 5-tuples on different cores, hence scaling performance with\nmultiple cores. Many prior efforts have looked into optimizing such\nsharding-oriented solutions [44 ###reference_b44###, 64 ###reference_b64###, 51 ###reference_b51###, 34 ###reference_b34###], including the\nrecent application of automatic code parallelization\ntechnology [59 ###reference_b59###].\nHowever, we believe that the existing\napproaches to multi-core scaling have run their course\n(\u00a72 ###reference_###). Realistic traffic workloads have\nheavy-tailed flow size distributions and are highly skewed. Large\n\u201celephant flows\u201d updating state on a single core will reduce total\nthroughput and inflate tail latencies for all packets, since they are\nlimited by the performance of a single CPU core. With emerging 200\nGbit/s\u20141 Tbit/s NICs, a single packet processing core may be too\nslow to keep up even with a single elephant flow. Additionally, with\nthe growing scales of volumetric resource exhaustion attacks, packet\nprocessors must gracefully handle attacks where adversaries force\npackets into a single flow [41 ###reference_b41###].\nThis paper introduces a scaling principle, state-compute\nreplication (SCR), that improves the software packet-processing\nthroughput for a single, stateful flow with additional cores,\nwhile avoiding shared memory and\ncontention. Figure 1 ###reference_### shows how SCR scales the throughput of a single TCP connection for a TCP connection\nstate tracker [39 ###reference_b39###] with more cores, when other scaling\ntechniques fail. A connection tracker may change its internal state\nwith each packet.\n###figure_1### SCR applies to any packet processing program that may be abstracted\nas a deterministic finite state machine. Intuitively, as long as each\ncore can reconstruct the flow state by processing all the\npackets of the flow, multiple cores can successfully process a single\nstateful flow with zero cross-core synchronization. However, naively\ngetting every CPU core to process every packet cannot improve\nthroughput with more cores. 
We need to meet two additional\nrequirements. First, in systems where CPU usage is dominated by\nper-packet work, we must preserve the total number of packets moving\nthrough the system. Second, we must divide up dispatch\u2014the\nCPU software labor of presenting the packets to the packet-processing\nprogram\u2014across multiple cores (\u00a73.1 ###reference_###). This leads to\nthe scaling principle below (more formal version in\n\u00a73.1 ###reference_###):\nState-Compute Replication (informal). In a system bound by\nper-packet CPU work and software dispatch, replicating\nthe state and the computation of a stateful flow across cores\nincreases throughput linearly with cores, so long as we preserve\nthe total number of packets traversing the system.\nTo meet the requirements for high performance in SCR, we design a\npacket history sequencer (\u00a73.2 ###reference_###), an entity\nwhich sees every packet and sprays packets across cores in a\nround-robin fashion, while piggybacking the relevant information from\npackets missed by a core on the next packet sent to that core. The\nsequencer must maintain a small bounded history of packet headers\nrelevant to the packet-processing program. A high-speed\npacket-processing pipeline, running either on a programmable NIC or a\ntop-of-the-rack switch, may serve as a sequencer. We present sequencer\ndesigns for two hardware platforms (\u00a73.3 ###reference_###), a\nTofino switch, and a Verilog RTL module we synthesized into the\nNetFPGA-PLUS reference pipeline.\nWe evaluated the scaling efficacy of SCR using a suite of realistic\nprograms and traffic (\u00a74 ###reference_###). SCR is the only technique\nwe are aware of that monotonically scales the total processing\nthroughput with additional cores regardless of the skewness in the\narriving flow size distribution. However, no technique can continue\nscaling indefinitely. In \u00a73.1 ###reference_### and \u00a74 ###reference_###, we\ndemonstrate the limits of SCR scaling, along with the significant\nperformance benefits to be enjoyed before hitting these\nlimits. Additionally, we show the resource usage for our sequencer\u2019s\nRTL design, which meets timing at 250 MHz. We believe this design is\nsimple and cheap enough to be added as an on-chip accelerator\nto a future NIC. We plan to release all software." | |
| }, | |
| { | |
| "section_id": "2", | |
| "parent_section_id": null, | |
| "section_name": "Background and Motivation", | |
| "text": "" | |
| }, | |
| { | |
| "section_id": "2.1", | |
| "parent_section_id": "2", | |
| "section_name": "High Speed Packet Processing", | |
| "text": "This paper considers packet-processing programs that must work at\nheavy network loads with quick turnarounds for packets. We are\nspecifically interested in applications implementing a \u201chairpin\u201d\ntraffic flow: receiving a packet, processing it as quickly as\npossible, and transmitting the packet right out, at the highest\npossible rate. Such programs exhibit compute and working set sizes\nthat are smaller in comparison to full-fledged applications (running\nin user space) or transport protocols (TCP stacks) running at\nendpoints.\nExamples of the kinds of applications we consider include (i)\nmiddlebox workloads (network functions), such as firewalls, connection\ntrackers, and intrusion detection/prevention systems; and (ii)\nhigh-volume compute-light applications such as key-value stores,\ntelemetry systems, and stream-based logging systems, which process\nmany requests with a small amount of computation per\nrequest [48 ###reference_b48###]. A key\ncharacteristic of such applications is their need for high\npacket-processing rate (which is more important than byte-processing\nrate) and the fact that their performance is primarily bottlenecked by\nCPU usage [65 ###reference_b65###, 69 ###reference_b69###, 42 ###reference_b42###].\nThe performance of such applications is mission-critical in many\nproduction systems. Even small performance improvements matter in\nInternet services with large server fleets and heavy traffic. As a\nspecific example, Meta\u2019s Katran layer-4 load balancer [8 ###reference_b8###]\nand CloudFlare\u2019s DDoS protection solution [42 ###reference_b42###]\nprocess every packet sent to those respective services, and must\nwithstand not only the high rate of request traffic destined to those\nservices but also large denial-of-service attacks. More generally, the\nacademic community has undertaken significant efforts for performance\noptimization of network functions, including optimization of the\nsoftware frameworks [44 ###reference_b44###], designing language-based\nhigh-performance isolation [58 ###reference_b58###], and developing\ncustom hardware offload solutions [60 ###reference_b60###].\nIn this paper, we consider applications developed within high-speed\npacket processing software frameworks. Given the slowdown of Moore\u2019s\nlaw and the end of Dennard scaling, the software packet-processing\nperformance of single CPU cores has saturated. Even expert developers\nmust work meticulously hard to improve per-core throughput by small\nmargins (like 10%) [47 ###reference_b47###, 49 ###reference_b49###, 36 ###reference_b36###, 63 ###reference_b63###]. The community has\npursued various efforts to improve performance, such as\nre-architecting the software stack to make efficiency\ngains [65 ###reference_b65###, 21 ###reference_b21###, 53 ###reference_b53###], introducing\nstateless hardware offloads working in conjunction with the software\nstack [29 ###reference_b29###, 19 ###reference_b19###], full-stack\nhardware offloads [30 ###reference_b30###, 33 ###reference_b33###], and kernel\nextensions [2 ###reference_b2###]. 
This paper considers\nsoftware frameworks that modify the device driver to enable\nprogrammable and custom functionality to be incorporated by developers\nat high performance with minimal intervention from the existing kernel\nsoftware stack [47 ###reference_b47###].\nSpecifically, we study performance and evaluate our techniques in the\ncontext of kernel extensions implemented using the eXpress Data Path\n(XDP/eBPF [47 ###reference_b47###]) within the Linux kernel. In\n\u00a73 ###reference_###, we will discuss how our observations and principles\napply more generally to other high-speed software packet-processing\nframeworks, including those written with user-space libraries like\nDPDK [21 ###reference_b21###]." | |
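For concreteness, the following is a minimal sketch of a \u201chairpin\u201d-style XDP program of the kind discussed above. It is our own illustration, not code from the paper: the map, section, and function names are hypothetical. It shows how such programs attach at the driver, keep lightweight per-core state, and return a verdict for every packet.

```c
// Minimal sketch (ours) of a hairpin XDP program: count each packet in a
// per-CPU map and immediately bounce it back out of the same interface.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int xdp_hairpin(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *cnt = bpf_map_lookup_elem(&pkt_count, &key);

    if (cnt)
        (*cnt)++;        /* per-core counter: no cross-core state sharing */
    return XDP_TX;       /* retransmit the packet out of the same NIC port */
}

char _license[] SEC("license") = "GPL";
```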
| }, | |
| { | |
| "section_id": "2.2", | |
| "parent_section_id": "2", | |
| "section_name": "Parallelizing Stateful Packet Processing", | |
| "text": "This paper considers the problem of parallelizing high-speed packet\nprocessing programs across multiple cores. The key challenge is\nhandling state: memory that must be updated in a sequential\norder upon processing each packet to produce a correct\nresult. Consider the example of the connection\ntracker [39 ###reference_b39###], a program which identifies the TCP\nconnection state (e.g. SYN sent, SYN/ACK sent, etc.) using packets\nobserved from both directions of a TCP connection. Each packet in the\nconnection may modify the internal connection state maintained by the\nprogram. There are two main techniques used to parallelize such\nprograms across cores.\nShared state parallelism. One could conceive a parallel\nimplementation that (arbitrarily) splits packets across multiple\ncores, with explicit synchronization or retry logic guarding access to the shared\nmemory, i.e. the TCP connection state, to ensure a correct result.\nShared-state parallelism works well when the contention to shared\nmemory is low. Specifically, shared-memory scaling could work well\nwhen (i) packets of a single flow arrive slowly enough, e.g. if there\nare a large number of connections with a roughly-equal packet arrival\nrate, or (ii) when there are efficient implementations available for\nsynchronization or atomic updates in\nsoftware [45 ###reference_b45###] or\nhardware [18 ###reference_b18###, 25 ###reference_b25###]. However, neither of these conditions\nare generally applicable. Many flow size distributions encountered in\npractical networks are highly skewed [35 ###reference_b35###, 71 ###reference_b71###] or exhibit highly bursty\nbehavior [66 ###reference_b66###], resulting in\nsignificant memory contention if packets from the heavier flows are\nspread across cores. Further, the state update operation in many\nprograms, including the TCP connection tracker, are too complex to be\nimplemented directly on atomic hardware, since the latter only\nsupports individual arithmetic and logic operations (like\nfetch-add-write). Our evaluation results (\u00a74 ###reference_###) show that\nthe performance of shared-state multicore processing plummets with\nmore cores under realistic flow size distributions.\nSharded (shared-nothing) parallelism. Today, the predominant\ntechnique to scale stateful packet processing across multiple cores is\nto process packets that update the same memory at the same core,\nsharding the overall state of the program across cores. Such sharding\nis usually achieved through techniques like Receive Side Scaling\n(RSS [4 ###reference_b4###]), available on modern NICs, to direct packets from the\nsame flow to the same core, and using shared-nothing data structures\non each core.\nHowever, sharding suffers from a few disadvantages. First, it is not\nalways possible to avoid coordination through sharding. There may be\nparts of the program state that are shared across all packets, such as\na list of free external ports in a Network Address Translation (NAT)\napplication. On the practical side, RSS implementations on today\u2019s\nNICs partition packets across cores using a limited number of\ncombinations of packet header fields. For example, a NIC may be\nconfigured to steer packets with a given source and destination IP\naddress to a fixed core. 
However, the granularity at which the\napplication wants to shard its state\u2014for example, a key-value cache\nmay seek to shard state by the key requested in the payload\u2014could be\ninfeasible to implement with the packet headers that are usable by RSS\nat the NIC [59 ###reference_b59###].\nSecond, sharding state may create load imbalance across cores if some\nflows are heavier or more bursty than others, creating hotspots on\nsingle CPU cores. Skewed flow size\ndistributions [35 ###reference_b35###], bursty flow\ntransmission patterns [66 ###reference_b66###], and\ndenial of service attacks [41 ###reference_b41###] create conditions ripe\nfor such imbalance. The research community has investigated solutions\nto balance the packet processing load by migrating flow shards across\nCPU cores [34 ###reference_b34###, 59 ###reference_b59###].\nHowever, the efficacy of re-balancing\nis limited by the granularity at which flows can be migrated across\ncores. As we show in our evaluation (\u00a74 ###reference_###), the throughput\nof the heaviest and most bursty flows is still limited by a single CPU\ncore, which in turn limits the total achieved throughput. Another\nalternative is to evenly spray incoming packets across\ncores [67 ###reference_b67###, 40 ###reference_b40###],\nassuming that only a small number of packets in each flow need to\nupdate the flow\u2019s state. If a core receives a packet that must update\nthe flow state, the packet is redirected to a designated core that\nperforms all writes to the state for that flow. However, the\nassumption that state is mostly read-only is not universal, e.g. a TCP\nconnection tracker may update state potentially on every\npacket. Further, packet reordering at the designated core can lead to\nincorrect results [34 ###reference_b34###]." | |
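To make the shared-state style above concrete, the sketch below shows (in eBPF C) a stateful update guarded by a bpf_spin_lock, the kind of lock-based baseline evaluated later (\u00a74). This is our own hedged illustration under assumed struct and map names, not the paper\u2019s code; a sharded variant would keep the same map but rely on RSS flow affinity so the lock (and its cache-line bouncing) can be dropped.

```c
// Hedged sketch: shared-state parallelism with an eBPF spin lock.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct flow_key   { __u32 saddr, daddr; __u16 sport, dport; __u8 proto; };
struct flow_state {
    struct bpf_spin_lock lock;
    __u32 fsm;                  /* e.g. connection-tracker state */
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 65536);
    __type(key, struct flow_key);
    __type(value, struct flow_state);
} conn_map SEC(".maps");

static __always_inline void update_shared(struct flow_key *key, __u32 next_fsm)
{
    struct flow_state *st = bpf_map_lookup_elem(&conn_map, key);

    if (!st)
        return;
    bpf_spin_lock(&st->lock);   /* cross-core contention happens here */
    st->fsm = next_fsm;
    bpf_spin_unlock(&st->lock);
}

char _license[] SEC("license") = "GPL";
```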
| }, | |
| { | |
| "section_id": "2.3", | |
| "parent_section_id": "2", | |
| "section_name": "Goals", | |
| "text": "Given the drawbacks of existing approaches for multi-core scaling\ndiscussed above\n(\u00a72.2 ###reference_###), we seek a\nscaling technique that achieves the following goals:\nGeneric stateful programming. The technique must produce\ncorrect results for general stateful updates, eschewing solutions\nthat only work for \u201csimple\u201d updates (i.e. fitting hardware\natomics) or only update state for a small number of packets per flow.\nSkew independence. The scaling technique should improve\nperformance independent of the incoming flow size distribution or\nhow the flows access the state in the program.\nMonotonic performance gain. Performance should\nimprove, not degrade or collapse, with additional cores." | |
| }, | |
| { | |
| "section_id": "3", | |
| "parent_section_id": null, | |
| "section_name": "State-Compute Replication (SCR)", | |
| "text": "In \u00a73.1 ###reference_###, we present scaling principles for multi-core\nstateful packet processing, to meet the goals in\n\u00a72.3 ###reference_###. In \u00a73.2 ###reference_### through\n\u00a73.4 ###reference_###, we show how to operationalize these\nprinciples." | |
| }, | |
| { | |
| "section_id": "3.1", | |
| "parent_section_id": "3", | |
| "section_name": "Scaling Principles", | |
| "text": "To simplify the discussion, suppose the packet-processing program is\ndeterministic, i.e. in every execution, it produces the same\noutput state and packet given a fixed input state and packet (we relax\nthis assumption in \u00a73.4 ###reference_###).\nPrinciple #1 (Replication for correctness). Sending every\npacket reliably to every core, and replicating the\nstate and computation on every core, produces the correct output state\nand packet on every core with no explicit cross-core\nsynchronization, regardless of how the state is accessed by packets.\nThis principle asks us to treat each core as one replica of a\nreplicated state machine running the deterministic packet-processing\nprogram. Each core processes packets in the same order, without\nmissing any packets. With each incoming packet, each core updates a\nprivate copy of its state, which is equal to the private copy on every\nother core. There is no need to synchronize explicitly. Further,\nreplication provides the benefit that the workload across cores is\neven regardless of how the state is accessed by packets, i.e. skew-independent.\nOne way to apply this principle naively is to broadcast every packet\nreceived externally on the machine to every core: with \ncores, for each external packet, the system will process internal packets, due to -fold packet duplication. However,\nartificially increasing the number of packets processed by the system\nwill significantly hurt performance.\nIn CPU-bound packet processing, smaller packets typically require the\nsame computation that larger packets do. That is, the total amount of\nwork performed by the system is proportional to the\npackets-per-second offered, rather than the\nbits-per-second [65 ###reference_b65###, 47 ###reference_b47###].\nSo how should one use replication for multi-core scaling?\nUnderstanding the dominant components of the per-packet CPU work in\nhigh-speed packet-processing frameworks offers insight. There are two\nparts to the CPU processing for each packet after the packet reaches\nthe core where it will be ultimately processed: (i) dispatch,\nthe CPU/software labor of presenting the packet to the user-developed\npacket-processing program, and signaling the packet(s) emitted by the\nprogram for transmission by the NIC; and (ii) the program\ncomputation running within the user-developed program\nitself. Dispatch often dominates the per-packet CPU\nwork [47 ###reference_b47###].\nWhile these observations are known in the context of high-speed packet\nprocessing, we also benchmarked a simple application on our own test\nmachine to validate them. Consider\nFigure 2 ###reference_###, where we show the\nthroughput (packets/second (a), bits/second (b)) and latency (c) of a\nsimple packet forwarder written in the XDP framework running on a\nsingle CPU core. Our testbed setup is described in much more detail\nlater (\u00a74 ###reference_###), but we briefly note here that our\ndevice-under-test is an Intel Ice Lake CPU configured to run at a\nfixed frequency of 3.6 GHz and attached to a 100 Gbit/s NIC. At each\npacket size below 1024 bytes, the CPU core is fully utilized (at 1024\nbytes the bottleneck is the NIC bandwidth). The achieved\npackets/second is stable across all packet sizes which are CPU bound\n(see the 2 RXQ curve). With a processing latency of roughly 14\nnanoseconds at all packet sizes (measured only for the XDP forwarding\nprogram), back-to-back packet processing should \u201cideally\u201d process\n 71 million packets/second. 
However, the achieved\nbest-case throughput ( 14 million packets/second) is much\nsmaller\u2014implying that significant CPU work is expended in presenting\nthe input packets to and extracting the output packets from the\nforwarder and setting up the NIC to transmit and receive those\npackets. This is not merely a feature of the framework we used (XDP);\nthe DPDK framework has similar dispatch\ncharacteristics [47 ###reference_b47###].\nOur key insight is that it is possible to replicate program\ncomputation without replicating dispatch, enabling multi-core\nperformance scaling. This leads us to the next principle:\n###figure_2### ###figure_3### ###figure_4### Principle #2 (State-Compute Replication). Piggybacking a bounded recent packet history on each packet sent to a core allows\nus to use replication (#1) while equalizing the external and internal\npackets-per-second CPU work in the system.\nPrinciple #2 states that replication (principle #1) is possible\nwithout increasing the total internal packet rate or per-packet CPU\nwork done by the system. Suppose it is possible to spray the\nincoming packets across cores in a round-robin fashion. If there are\n cores, each core receives every packet. Then:\nIt is unnecessary for each core to have the most\nup-to-the-packet private state at all times. For correctness, it\nis sufficient if a core has a \u201cfresh enough\u201d state to make a\ndecision on the current packet that it is processing.\nWith each new packet, suppose the core that receives it also\nsees (as metadata on that packet) all the packets from the\nlast time it processed a packet, i.e. a recent packet\nhistory. The core can simply \u201ccatch up\u201d on the computations\nrequired to obtain the most up-to-the-packet value for its private\nstate.\nIf packets are sprayed round-robin across cores, the number of\nhistoric packets needed to ensure that the most updated state is\navailable to process the current packet is equal to the\nnumber of cores. Further, just those packet bits required to\nupdate the shared state are necessary from each historic packet,\nallowing us to pack multiple packets\u2019 worth of history into a\nsingle packet received at a core.\nAs a simple model, suppose a system has cores, and each core can\ndispatch a single packet in cycles and runs a packet-processing\nprogram that computes over a single packet in cycles. For each\npiggybacked packet, the total processing time is . When dispatch time dominates compute time, . With \ncores, the total rate at which externally-arriving packets can be\nprocessed is . Hence, it is possible to scale the packet-processing rate linearly\nwith the number of cores . In App.A ###reference_###, we show that\na model like this indeed accurately predicts the empirical throughput\nachieved with a given number of cores.\nIntuitively, doing some extra \u201clightweight\u201d program computation per\npacket enables scaling the \u201cheavyweight\u201d dispatch computation with\nmore cores, while maintaining correctness.\nPrinciple #3 (Scaling limits). Principle #2 provides a linear\nscale-up in the packets-per-second throughput with more cores, so long\nas dispatch dominates the per-packet work.\nThe scaling benefits of principle #2 taper off beyond a point. Dispatch can\nbe overtaken as the primary contributor to per-packet CPU work, for\nexample, when (i) the compute time for each piggybacked\npacket becomes sizable; (ii) the per-packet compute time itself\nincreases due to overheads in the system, e.g. 
larger memory access\ntime when a core\u2019s copy of the state spills into a larger memory; or\n(iii) other components such as the NIC or PCIe become the bottleneck\nrather than the CPU. When this happens, the approximation in our\nsimple linear model () no longer holds, and the system\u2019s\npacket rate no longer scales linearly with cores." | |
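The simple model above can be summarized as follows. This is our hedged reconstruction of the formulas, using n cores, per-packet dispatch cost d cycles, per-packet compute cost c cycles, and core frequency f; the notation is ours and not copied from the paper.

```latex
% Per external packet handled by a core: one dispatch plus compute over the
% packet and its piggybacked history (n packets' worth of compute in total).
T_{\mathrm{core}} = d + n\,c \ \text{cycles}
\qquad
R(n) = \frac{n\,f}{d + n\,c}
\;\approx\;
\begin{cases}
  n\,f/d, & d \gg n\,c \quad \text{(dispatch-bound: linear scaling)}\\
  f/c,    & n\,c \gg d \quad \text{(compute-bound: scaling saturates, cf. Principle \#3)}
\end{cases}
```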
| }, | |
| { | |
| "section_id": "3.2", | |
| "parent_section_id": "3", | |
| "section_name": "Operationalizing SCR", | |
| "text": "Operationalizing the scaling principles discussed above\n(\u00a73.1 ###reference_###) conceptually requires two pieces.\nA reliable packet history sequencer (\u00a73.3 ###reference_###).\nWe require an additional entity in the system, which we call a sequencer, to (i) steer packets across cores in round-robin\nfashion, (ii) maintain the most recent packet history across all\npackets arriving at the machine, and (iii) piggyback the\nhistory on each packet sent to the cores. After the packet is\nprocessed by a CPU core, its piggybacked history can be stripped off\non the return path either at the core itself or at the sequencer.\nThe size of the packet history depends only on the\nnumber of cores and metadata size (\u00a73.1 ###reference_###) and\nis independent of the number of active flows.\nThe NIC hardware or the top-of-the-rack switch are natural points to\nintroduce the sequencer functionality, since they observe all the\npackets flowing into and out of the machine. Today\u2019s existing\nfixed-function NICs do not implement the functionality necessary to\nconstruct and piggyback a reliable packet history. However, we have\nidentified two possible instantiations that could, in the near future,\nachieve this: (i) emerging NICs, e.g. with programmable\npipelines [27 ###reference_b27###, 3 ###reference_b3###, 1 ###reference_b1###], implementing the full\nfunctionality of the reliable sequencer; or (ii) a combination of a\nNIC implementing round-robin packet steering [4 ###reference_b4###, 7 ###reference_b7###] and a programmable top-of-the-rack switch\npipeline [37 ###reference_b37###, 23 ###reference_b23###, 17 ###reference_b17###] for maintaining and\npiggybacking the packet history. We believe that either of these\ninstantiations may be realistic and applicable given the context: for\nexample, high-speed programmable NICs are already common in some large\nproduction systems [43 ###reference_b43###], as are programmable\nswitch pipelines [57 ###reference_b57###]. We will show two possible\nhardware designs in \u00a73.3 ###reference_###. Hereafter, for brevity,\nwe refer to both of these designs as simply sequencers.\nAn SCR-aware packet-processing program. The packet-processing\nprogram must be developed to replicate the program state and keep\nprivate copies per core. Further, the program must process the packet\nhistory first before computing over the current packet. We discuss a\nrunning example that demonstrates how to transform a single-threaded\nprogram to its SCR-aware variant in App.C ###reference_###. We believe\nthat these program transformations can be automated in the future.\n###figure_5### ###figure_6### An example showing scaling principles in action. Consider\nFigure 3 ###reference_###, where a sequencer and three cores are used to run a\npacket-processing program. As shown in Figure 3a ###reference_sf1###, the\nsequencer sprays packets (i.e. ) in a round-robin\nfashion across cores (i.e. ). Further,\nthe sequencer stores the recent packet history consisting of the packet\nfields from the last packets which are relevant to evolving the\nflow state.\nWe denote the relevant part of a packet by\n. For example, in a TCP connection tracking program, this includes\nthe TCP 4-tuple, the TCP flags, and sequence and ACK numbers.\nNote that this\npacket history is updated only by the sequencer and is never written to\nby the cores. In the example in Figure 3a ###reference_sf1###, the packet history\nsupplied to processing packet is . 
As shown in Figure 3b ###reference_sf2###, each core updates its\nlocal private state, first fast-forwarding the state by running the\nprogram through the packet history , and then\nprocessing the packet sprayed to it.\nIf the packet-processing program is deterministic\n(\u00a73.1 ###reference_###), an SCR-aware program is guaranteed to produce\nthe correct output state and packet if every CPU core is guaranteed to\nreceive the packets sent to it by the sequencer. We discuss how to\nhandle packet loss and nondeterminism in \u00a73.4 ###reference_###." | |
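The following is a hedged sketch (ours, in plain C rather than eBPF C) of the SCR-aware processing step from Figure 3b: fast-forward the core-private state through the piggybacked history, then process the packet assigned to this core. All struct and function names are illustrative assumptions, not the paper\u2019s code.

```c
// Hedged sketch of an SCR-aware per-core handler.
#include <stdint.h>

#define MAX_HIST 16                      /* at least the number of cores */

struct hist_entry {                      /* h(p): packet bits relevant to the state */
    uint8_t  tcp_flags;
    uint32_t seq, ack;
};

struct scr_metadata {                    /* prepended by the sequencer (Sec. 3.3.1) */
    uint16_t earliest;                   /* index of the oldest history entry */
    uint16_t count;                      /* number of valid entries */
    struct hist_entry hist[MAX_HIST];
};

typedef struct { uint32_t fsm; } flow_state;

/* The application's ordinary single-threaded state transition. */
static void step(flow_state *st, const struct hist_entry *h)
{
    (void)st; (void)h;                   /* app-specific, e.g. connection tracking */
}

static _Thread_local flow_state my_state;   /* per-core private copy: no sharing */

static void scr_process(const struct scr_metadata *m,
                        const struct hist_entry *current)
{
    /* 1. Catch up on the packets this core did not receive, oldest first. */
    for (uint16_t i = 0; i < m->count; i++)
        step(&my_state, &m->hist[(m->earliest + i) % MAX_HIST]);

    /* 2. Process the packet actually sprayed to this core. */
    step(&my_state, current);
}
```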
| }, | |
| { | |
| "section_id": "3.3", | |
| "parent_section_id": "3", | |
| "section_name": "Packet History Sequencer", | |
| "text": "###figure_7### ###figure_8### ###figure_9### The primary goal of the sequencer is to maintain and propagate recent\npacket history to CPU cores to help replicate the computation with the\ncorrect program state (\u00a73.2 ###reference_###). We assume the NIC\nis already capable of spraying packets across CPU cores [4 ###reference_b4###, 7 ###reference_b7###], and hence do not discuss that functionality\nfurther. We describe the rest of the sequencer\u2019s functions in\nterms of the following: (i) designing a packet format that modifies\nexisting packets to piggyback history from the sequencer to the CPU\ncores; (ii) designing a hardware data structure that maintains a\nrecent bounded packet history at the sequencer, and enables reading\nout the history into metadata on the packet. The packet fields that\nare maintained in the sequencer history depend on the specific fields\nused by the packet-processing application. The number of historic\npackets that must be tracked depends on the degree of parallelism that\nis sought, e.g. the number of available CPU cores over which scaling is\nimplemented.\nWe have implemented sequencing hardware data structures on two\nplatforms, the Tofino programmable switch pipeline [23 ###reference_b23###] and a\nVerilog module that we integrated into the NetFPGA-PLUS\nproject [12 ###reference_b12###]." | |
| }, | |
| { | |
| "section_id": "3.3.1", | |
| "parent_section_id": "3.3", | |
| "section_name": "3.3.1 Packet format", | |
| "text": "The key question answered in this subsection is: given a packet, what\nis the best place to put the packet history on it? While this may\nappear \u201cjust an engineering detail\u201d, designing the right packet\nformat has important implications to the design of hardware data\nstructures on the sequencer and the SCR-aware program.\nAs shown in Figure 4a ###reference_sf1###, we choose to place the packet history\nclose to the beginning of the packet, before the entirety of the\noriginal packet. Relative to placing the packet history between\nheaders of the original packet, this placement simplifies the hardware\nlogic that writes the history into the packet, as the write must\nalways occur at a fixed address (0) in the packet buffer. Further, for\nreasons explained in \u00a73.3.2 ###reference_.SSS2###, we include a\npointer to the metadata of the packet that arrived the earliest among\nthe ones in the piggybacked history. The earliest packet does not\nalways correspond to the first piece of metadata when reading the\nbytes of the packet in order.\nKeeping all the bytes of the original packet together in one place\nalso simplifies developing an SCR-aware packet-processing\nprogram. The packet parsing logic of the original program can remain\nunmodified if the program starts parsing from the location in the\nmodified packet buffer which contains all the bytes of the original\npacket in order.\nFinally, we also prefix an additional Ethernet header to the packet in\ninstantiations of the sequencer which run outside of the NIC, i.e. a\ntop-of-the-rack switch. Adding this header helps the NIC process the\npacket correctly: without it, the packet appears to have an\nill-formatted Ethernet MAC header at the NIC. Our setup also uses this\nEthernet header to force RSS on the NIC [4 ###reference_b4###] to spray packets\nacross CPU cores (our testbed NIC (\u00a74.1 ###reference_###)\nsupports hashing on L2 headers). This additional Ethernet header is\nnot needed in a sequencer instantiation running on the NIC." | |
| }, | |
| { | |
| "section_id": "3.3.2", | |
| "parent_section_id": "3.3", | |
| "section_name": "3.3.2 Hardware data structures for packet history", | |
| "text": "We show how to design data structures to maintain and update a recent\npacket history on two high-speed platforms, a Tofino programmable\nswitch pipeline [23 ###reference_b23###] and a Verilog module integrated into the\nNetFPGA-PLUS platform [12 ###reference_b12###]. These designs are specific\nto the platform where they are implemented, and hence we describe them\nindependently.\nA key unifying principle between the two designs is that although the\nitems in the maintained packet history change after each packet, we\nonly want to update a small part of the data structure for each\npacket. Conceptually, a ring buffer data structure is appropriate to\nmaintain such histories. Hence, in both designs, we use an index\npointer to refer to the current data item that must be updated,\nwhich corresponds to the head pointer of the abstract ring buffer\nwhere data is written.\nTofino. We use Tofino\nregisters [26 ###reference_b26###], which are stateful\nmemories to hold data on the switch, to record the bits of each\nhistoric packet relevant to the computation in the packet-processing\nprogram. Suppose the pipeline has match-action table stages, \nregisters per stage, and bits per register. For simplicity in this\ndescription, we assume there is exactly one packet field of size \nbits used in the computation in the packet-processing program. Our\ndata structure can maintain a maximum of \nbits of recent packet history, i.e. history for \npackets, as shown in Figure 4b ###reference_sf2###. We have successfully\ncompiled the design to the Tofino ASIC.\nFirst, we use a single register in the first stage to store the index\npointer. The pointer refers to the specific register in the subsequent\nstages that must be updated with a header field from the current\npacket. The index pointer is incremented by 1 for each packet, and\nwraps back to 0 when it reaches the maximum number of fields\nrequired in the history. The pointer is also carried on a metadata\nfield on the packet through the remaining pipeline stages.\nNext, register ALUs in subsequent stages are programmed to read out\nthe values stored in them into pre-designated metadata fields on the\npacket. If the index pointer points to this register, an additional\naction occurs: rewrite the stored contents of the register by the\npre-designated history fields from the current packet.\nFinally, all the metadata fields, consisting of the packet history\nfields and the index pointer, are deparsed and serialized into the\npacket in the format shown in Figure 4a ###reference_sf1###. We also add an\nadditional Ethernet header to ensure that the server NIC can receive\nthe packets correctly (\u00a73.3.1 ###reference_.SSS1###).\nRecent work explored the design of ring buffers to store packet\nhistories in the context of debugging [50 ###reference_b50###],\nreading out the histories from the control plane when a debugging\naction is triggered. A key difference in our design is that reading\nout histories into the packet is a data plane operation, occurring on\nevery packet.\nNetFPGA. To show the possibility of developing high-speed\nfixed-function hardware for sequencing, we also present a sequencer\ndesign developed in Verilog in Figure 4c ###reference_sf3###.\nSuppose we wish to maintain a history of packets, each packet\ncontributing bits of information. A simple design, for small\nvalues of and (we used and ), uses a memory\nwhich has rows, each containing a tuple of bits. 
We also\nmaintain a register containing the index pointer ( bits),\ninitialized to zero. At the beginning the memory is initialized with\nall zeroes. When a packet arrives, it is parsed to extract the bits\nrelevant to the packet history. Then the entire memory is read and put\nin front of the packet (moving the packet contents by a fixed size\nknown apriori, bits). The information relevant to the\npacket history from the current packet is put into the memory row\npointed to by the index pointer, and the index pointer is incremented\n(modulo the memory size). We have integrated this design into the\nNetFPGA-PLUS platform." | |
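A software model of the NetFPGA-style data structure just described is sketched below. It is our own hedged illustration (constants and names are assumptions), intended only to make the per-packet ring-buffer behavior concrete, not to stand in for the Verilog.

```c
// Software model of the sequencer ring buffer: k rows of w-bit tuples plus an
// index pointer acting as the head of the ring.
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define K 8                                   /* history length (rows)        */
typedef struct { uint8_t bits[14]; } row_t;   /* w = 112 bits per row         */

static row_t    hist_mem[K];                  /* initialized to all zeroes    */
static uint32_t head;                         /* index pointer; log2(K) bits in hardware */

/* Per packet: emit the index pointer and the full history in front of the
 * packet, then overwrite the row at the index pointer with the current
 * packet's relevant bits and advance the pointer. Returns bytes prepended. */
static size_t sequencer_step(const row_t *pkt_bits, uint8_t *out)
{
    memcpy(out, &head, sizeof(head));
    memcpy(out + sizeof(head), hist_mem, sizeof(hist_mem));
    hist_mem[head] = *pkt_bits;
    head = (head + 1) % K;
    return sizeof(head) + sizeof(hist_mem);
}
```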
| }, | |
| { | |
| "section_id": "3.4", | |
| "parent_section_id": "3", | |
| "section_name": "Packet Loss and Nondeterminism", | |
| "text": "Handling Packet Loss. Packets can be lost\neither prior to the sequencer, after the sequencer but prior to\nprocessing at a CPU core, or after processing at a core. Among\nthese, we only care about the second kind of packet loss, since this\nis the only one that is problematic specifically for SCR,\nintroducing the possibility that flow states on the CPU cores might\nbecome inconsistent with each other.\nWe expect that SCR will be deployed in scenarios where the event of\npacket loss between the sequencer and CPU cores is rare. First, we\ndo not anticipate any packet loss in an instantiation where the\nsequencer is running entirely on a NIC, since the host interconnect\nbetween the NIC and CPU cores uses credit-based flow control and is\nlossless by design [31 ###reference_b31###, 56 ###reference_b56###]. In an instantiation where the sequencer is\nrunning on a top-of-the-rack switch, it is possible to run\nlink-level flow control mechanisms like PFC [6 ###reference_b6###]\n(as some large production networks do [46 ###reference_b46###]) to\nprevent packet loss between the switch and server\ncores. We discuss below how SCR can handle rare\npacket drop and corruption events while maintaining consistency\namong the states on CPU cores.\nHow should a CPU core that has lost a packet arriving from the\nsequencer synchronize itself to the correct flow state? There are\ntwo design options: the core can either explicitly read the full\nflow state from a more up-to-date core, or it can read the packet\nhistory from either the sequencer or a log written by a more\nup-to-date core, and then use the history to catch up its private\nstate (akin to Figure 3 ###reference_###). Since we operate in a regime where\npacket losses are rare, but the full set of flow states is large, we\nprefer to synchronize the packet history rather than the\nstate. Further, to simplify the overall design, we avoid explicit\ncoordination between the cores and the sequencer, synchronizing\nthe history among the cores only.\nOur objective is atomicity: any packet is either processed by\nall the cores or none of the cores. If a packet is sent by the\nsequencer to any core (in original or as part of the packet\nhistory), it should be processed in the correct order by all\nthe cores. To achieve this, we (i) have the sequencer attach an\nincrementing sequence number to each packet released by it; (ii) use\na per-core, lockless, single-writer multiple-reader log, into which\neach core writes the history contained in each packet it receives\n(including the relevant data for the original packet); and (iii)\nintroduce an algorithm to catch up the flow state on each core upon\ndetection of loss.\nThe algorithm proceeds as follows (more information is available in\nAlg.1 ###reference_### in App.B ###reference_###). Each CPU core\n maintains a per-core log with one entry for each sequence number\n. In a system with cores, the history metadata of the packet\nwith sequence (say ) will appear in packets with\nsequence numbers through ; conversely, a packet with\nsequence number contains \nwhere .\nFor core and sequence number ,\nInitially, , to denote\nthat the log entry for every sequence is uninitialized at every core\n. When a (fixed) core receives a packet with sequence number\n, it first detects packet loss by comparing the max sequence number\nit has seen so far (say ) with the earliest sequence number in\nthe new packet it receives (), assuming no reordering between\nthe sequencer and the core. 
Then, processes every sequence number\n in order of increasing , as\nfollows:\nif , i.e. sequence was lost between the sequencer\nand core , the core updates . For\nsuch packets, core will read from the logs of other cores in a loop, until discovers either that (i) \nis written in , in which case catches up its private\nstate by reading this history; or (ii) on all\ncores , concluding that sequence was never originally\nreceived on any core, and does not need to be recovered for\natomicity;\nif , i.e. sequence is successfully\nreceived at core (as part of the current packet), core \nupdates available in the packet,\nand then proceeds with regular processing as in\n\u00a73.2 ###reference_###.\nIn App.B ###reference_###, we formally prove that, under some\nmild assumptions, this algorithm always terminates in a state that is\neventually consistent across all CPU cores. Despite cores possibly\nwaiting on one another, there will be no deadlocks.\nHandling non-deterministic programs. Programs replicating computation across cores may diverge from each\nother due to two possible concerns. The first is the use of\ntimestamps in computations (e.g. to implement a token bucket rate\nlimiter). We avoid the use of timestamps measured\nlocally at CPU cores, in favor of measuring and attaching timestamps\nto the packet history at the sequencer, and having the\npacket-processing program use the attached timestamp instead. Modern NICs and\nprogrammable switches support high-resolution hardware\ntimestamping over packets [26 ###reference_b26###, 13 ###reference_b13###]. The second concern is the use of random\nnumbers in computations. We make program execution deterministic by\nfixing the seed of the pseudorandom number generator used across\ncores." | |
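The sketch below is our own hedged reconstruction of the per-core loss-recovery logic described above (not Alg. 1 verbatim): each core keeps a single-writer, multiple-reader log indexed by sequence number, marks gaps as LOST, and either catches up from a peer\u2019s log or concludes that no core received the packet. A production version would need proper atomics and memory barriers; plain volatile is used here only for brevity.

```c
// Hedged reconstruction of SCR loss recovery (all names and sizes are ours).
#include <stdint.h>
#include <stdbool.h>

#define MAX_CORES 16
#define LOG_SZ    (1u << 16)                 /* log indexed by seq mod LOG_SZ */

struct hist { uint8_t tcp_flags; uint32_t seq_no, ack_no; };
enum { E_UNINIT = 0, E_LOST, E_HAVE };
struct log_entry { volatile uint8_t tag; struct hist h; };

static struct log_entry logs[MAX_CORES][LOG_SZ];

static void apply(const struct hist *h) { (void)h; }   /* app state transition */

/* Core 'me' received a packet with sequence number 'seq' whose piggybacked
 * history (oldest first) covers sequences seq-hist_len+1 .. seq; 'max_seen'
 * is the largest sequence number this core had seen before this packet. */
static void recover_and_apply(int me, int n_cores, uint64_t max_seen,
                              uint64_t seq, const struct hist *hist, int hist_len)
{
    for (uint64_t s = max_seen + 1; s <= seq; s++) {
        struct log_entry *mine = &logs[me][s % LOG_SZ];

        if (s + (uint64_t)hist_len <= seq) {
            /* Sequence s is absent from this packet: it was lost between the
             * sequencer and this core. Mark it LOST, then poll the other logs. */
            mine->tag = E_LOST;
            for (;;) {
                bool lost_everywhere = true;
                for (int c = 0; c < n_cores; c++) {
                    if (logs[c][s % LOG_SZ].tag == E_HAVE) {
                        apply(&logs[c][s % LOG_SZ].h);   /* catch up from a peer */
                        goto next_seq;
                    }
                    if (logs[c][s % LOG_SZ].tag != E_LOST)
                        lost_everywhere = false;
                }
                if (lost_everywhere)        /* no core ever received s: skip it */
                    goto next_seq;
            }
        } else {
            /* Sequence s is in this packet's history: log it, then apply it. */
            mine->h   = hist[(hist_len - 1) - (int)(seq - s)];
            mine->tag = E_HAVE;
            apply(&mine->h);
        }
next_seq: ;
    }
}
```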
| }, | |
| { | |
| "section_id": "4", | |
| "parent_section_id": null, | |
| "section_name": "Evaluation", | |
| "text": "We seek to answer two main questions through the experiment setup\ndescribed in \u00a74.1 ###reference_###.\n(1) Does state-compute replication provide better multi-core\nscaling than existing techniques (\u00a74.2 ###reference_###)?\n(2) How practical is sequencer hardware\n(\u00a74.3 ###reference_###)?" | |
| }, | |
| { | |
| "section_id": "4.1", | |
| "parent_section_id": "4", | |
| "section_name": "Experiment Setup", | |
| "text": "Machines and configurations. Our experiment setup consists of\ntwo server machines connected back-to-back over a 100 Gbit/s\nNvidia/Mellanox ConnectX-5 NIC on each machine. Our servers run Intel\nIce Lake processors (Xeon Gold 6334) with 16 physical cores (32\nhyperthreads) and 256 GB DDR4 physical memory spread over two NUMA\nnodes. The system bus is PCIe 4.0 16x. We run Ubuntu 22 with Linux\nkernel v6.5. One machine serves as a packet replayer/generator,\nrunning a DPDK burst-replay program which can transmit packets from a\nprovided traffic trace. We have tested that the traffic generator can\nreplay large traces (1 million packets) at speeds of 120\nmillion packets/second (Mpps), for sufficiently small packets (so that\nthe NIC bandwidth isn\u2019t saturated first). The traffic generator can be\ndirected to transmit packets at a fixed transmission (TX) rate and\nmeasure the corresponding received (RX) packet rate. Our second server\nis the Device Under Test (DUT), which runs on identical hardware and\noperating system as the first server. We implement standard\nconfigurations to benchmark high-speed packet\nprocessing [47 ###reference_b47###]: hyperthreading is disabled; the\nprocessor C-states, DVFS, and TurboBoost are disabled; dynamic IRQ\nbalancing is disabled; and the clock frequency is set to a fixed 3.6\nGHz. We enable PCIe descriptor compression and use 256 \nPCIe descriptors. Receive-side scaling (RSS [4 ###reference_b4###]) is configured\naccording to the baselines/programs, see\nTable 1 ###reference_###. We use a single receive queue (RXQ)\nper core unless specified otherwise.\nThe definition of throughput. We use the standard maximum\nloss-free forwarding rate (MLFFR [5 ###reference_b5###]) methodology to\nbenchmark packet-processing throughput. Our threshold for packet loss\nis in fact larger than zero (we count loss as \u201closs-free\u201d),\nsince, at high speeds we have observed that the software typically\nalways incurs a small amount of bursty packet loss. We use binary\nsearch to expedite the search for the MLFFR, stopping the search when\nthe bounds of the search interval are separated by less than 0.4\nMpps. Experimentally, we\nobserve that MLFFR is a stable throughput metric: we get highly\nrepeatable results across multiple runs. We only report throughput\nfrom a single run of the MLFFR binary search.\nTraces. We are interested in understanding whether SCR provides better multi-core scaling than existing techniques on\nrealistic traffic workloads. We have set up and used three traces for\nthroughput comparison: a university data center\ntrace [35 ###reference_b35###], a wide-area Internet\nbackbone trace from CAIDA [11 ###reference_b11###], and a synthetic trace with\nflows whose sizes and inter-arrivals were sampled from a hyperscalar\u2019s\ndata center flow characteristics [32 ###reference_b32###]. These traces\nare highly dynamic, with flow states being created and destroyed\nthroughout\u2014an aspect that we believe is crucial to handle in real\ndeployment environments (i.e. the programs are not simply processing a\nstable set of active flows). Further, we ensure that all TCP flows\nthat begin in the trace also end, by setting TCP SYN and FIN flags for\nthe first and last packets (resp.) of each flow in the trace. This\nallows the trace to be replayed multiple times with the correct\nprogram semantics. 
The flow size distributions of the traces are shown\nin Figure 5 ###reference_###.\n###figure_10### ###figure_11### ###figure_12### The eBPF framework limits our implementations in terms of the number\nof concurrent flows that our stateful data structures can\ninclude. This is not a limitation of the techniques, but an\nartifact of the current packet-processing framework we\nuse (eBPF/XDP). To account for this limitation, specifically for the\nCAIDA trace, we have sampled flows from the trace\u2019s empirical flow\nsize distribution to faithfully reflect the underlying distribution,\nwithout over-running the limit on the number of concurrent flows that\nany of our baseline programs may hold across the lifetime of the\nexperiment.\nBaselines. We compare state-compute replication against (i)\nstate sharing, an approach that uses hardware atomic instructions\n(when the stateful update is simple enough) or eBPF\nspinlocks [10 ###reference_b10###] (when it is not) to share state across\nCPU cores; (ii) state sharding using classic RSS; and (iii) sharding\nusing RSS++ [34 ###reference_b34###, 59 ###reference_b59###],\nthe state-of-the-art flow sharding technique to balance CPU\nload. RSS++ solves an optimization problem that takes as input the\nincoming load imposed by flow shards, and migrates shards to minimize\na linear combination of load imbalance across CPU cores and the number\nof cross-core shard transfers needed.\nBoth SCR and state sharing spray packets evenly across CPU cores. The\npackets sent to each core for the sharding techniques depends on the\nconfiguration of RSS, which varies across the programs we\nevaluated (see below and Table 1 ###reference_###). Running\nRSS++ over eBPF/XDP requires patching the NIC\ndriver [9 ###reference_b9###]. Unless specified otherwise, we run SCR without loss recovery\n(\u00a73.4 ###reference_###); \nwe evaluate loss recovery separately (\u00a74.2 ###reference_###).\nPrograms. We tested five packet-processing programs\ndeveloped in eBPF/XDP, including (i) a heavy hitter monitor, (ii) DDoS\nmitigator, (iii) TCP connection state tracker, (iv) port-knocking\nfirewall, and (v) a token bucket policer.\nTable 1 ###reference_### summarizes these programs. Each\nprogram maintains state across packets in the form of a key-value\ndictionary, whose size and contents are listed in the table. We\ndeveloped a cuckoo hash table to implement the functionality of this\ndictionary with a single BPF helper call [15 ###reference_b15###].\nThe packet fields in the key determine how RSS must be configured:\npackets having the same key fields must be sent to the same CPU\ncore. However, today\u2019s NICs do not allow RSS to steer packets on\narbitrary sets of packet\nfields [59 ###reference_b59###]. We pre-process the\ntrace to ensure that RSS hashing indeed shards the flow state\ncorrectly. \nFor the connection tracker, since both directions of the\nconnection must go to the same CPU core, we use the keyed hash\nfunction prescribed by symmetric RSS [70 ###reference_b70###]." | |
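The MLFFR binary search described above can be illustrated as follows. This is our own sketch: the traffic-generator control function is hypothetical, the 0.4 Mpps stopping resolution matches the text, and the loss threshold is left as a parameter since the text does not restate its exact value here.

```c
// Hedged illustration of the MLFFR binary-search methodology.

/* Hypothetical hook: replay the trace at 'rate_mpps' and return the observed
 * loss fraction at the device under test. */
extern double replay_and_measure_loss(double rate_mpps);

static double find_mlffr(double lo_mpps, double hi_mpps, double loss_threshold)
{
    while (hi_mpps - lo_mpps > 0.4) {            /* stop at 0.4 Mpps resolution */
        double mid = (lo_mpps + hi_mpps) / 2.0;
        if (replay_and_measure_loss(mid) <= loss_threshold)
            lo_mpps = mid;                       /* DUT keeps up: go higher */
        else
            hi_mpps = mid;                       /* too much loss: go lower */
    }
    return lo_mpps;                              /* reported MLFFR */
}
```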
| }, | |
| { | |
| "section_id": "4.2", | |
| "parent_section_id": "4", | |
| "section_name": "Multi-Core Throughput Scaling", | |
| "text": "In this section, we compare the MLFFR throughput\n(\u00a74.1 ###reference_###) of several packet-processing programs\n(Table 1 ###reference_###) scaled across multiple cores using\nfour techniques: SCR (\u00a73 ###reference_###), state sharing with\npackets sprayed evenly across all cores, sharding using RSS, and\nsharing using RSS++\n(\u00a72 ###reference_###). Since the TCP connection tracking\nprogram requires packets from the two directions of the connection\nto be aligned, we evaluated it on a synthetic but\nrealistic hyperscalar data center trace\n(\u00a74.1 ###reference_###). For the rest of the programs, we\nreport results from real university data center and Internet backbone\ntraces.\n###figure_13### We have ensured that these experiments reflect a fair comparison of\nCPU packet-processing efficacy. First, we truncated the packets in the\ntraces to a size smaller than the full MTU, to stress CPU performance\nwith a high packets/second (Mpps) workload\n(\u00a73.1 ###reference_###). Further, we fix the packet sizes used across\nall baselines for a given program. We used a fixed packet size of 256 bytes for\nthe connection tracker and 192 bytes for the others. The packet size\nlimits the number of items of history metadata that can be piggybacked\non each packet. Since the metadata size changes by the program\n(Table 1 ###reference_###), the maximum number of cores we\ncan support for a fixed packet size also varies by program (we\nsupport 7 cores for the token bucket, heavy hitter detector, and\nconnection tracker, and 14 for the DDoS mitigation and port-knocking\nfirewall).\nThroughput results.\nFigure 6 ###reference_### and Figure 7 ###reference_### show the\nthroughput as we increase the number\nof packet-processing cores. SCR is the only multi-core scaling\ntechnique that can monotonically scale the throughput of all the stateful\npacket-processing programs we evaluated across multiple cores,\nregardless of the flow size distribution (\u00a72.3 ###reference_###). The throughput\nfor SCR increases\nlinearly across cores in all of the configurations we tested. Somewhat\nsurprisingly, SCR provides even better absolute performance than\nhardware atomic instructions in the case of the heavy hitter\nand DDoS mitigation programs. However, the performance of lock-based\nsharing falls off catastrophically with 3 or more cores.\n###figure_14### The throughput of sharding using RSS depends on the vagaries of how\nthe RSS hash function steers flows to cores. RSS can neither split a\nsingle elephant flow, nor does it intelligently redistribute elephant\nflows to balance load across CPU cores. RSS++ indeed redistributes\nflows across cores to adapt to high workload on some cores. Such\nredistribution can sometimes confer benefits over RSS (for example,\nsee Figure 6 ###reference_### (e) and (h)). However, fundamentally,\nflow-based sharding is ineffective when the workload is highly skewed\nwith elephant flows, as many real traffic workloads are\n(Figure 5 ###reference_###). Moreover, our results show that\nRSS++ is not always better than simple RSS. Re-balancing load by\nmigrating a flow shard across cores requires bouncing the cache\nline(s) containing the flow states across cores, an action that can\nimpede high performance if done too frequently. 
On the other hand,\nrebalancing infrequently may reintroduce skew across cores, since the\nfuture load from flow shards may drift from their current load\n(the basis for load rebalancing in RSS++).\nWhy does SCR scale better than the other techniques?\nFigure 8 ###reference_### shows detailed performance metrics measured\nfrom Intel\u2019s performance counter monitor (PCM [24 ###reference_b24###]) and\nBPF profiling [14 ###reference_b14###]. We measure the L2 cache hit\nratios, instructions retired per cycle (IPC), and the program\u2019s\ncomputation latency (only the XDP portion, excluding the dispatch functionality in the driver), as the load offered to the system\nincreases, when our token bucket policer is run across different\nnumbers of cores (2, 4, or 7). The numbers show the averages for these\nmetrics across the cores running the program. Error bars for IPC show\nthe min and max values across cores.\n###figure_15### Lock-based sharing in general suffers from lower L2 cache hit ratios\n((a)\u2013(c)) and higher latencies ((g)\u2013(i)) due to lock and cache line\ncontention across cores\u2014a trend that holds as the offered load\nincreases and also with additional cores at the same offered load. As\nwe might expect, IPC increases with the offered load ((d)\u2013(f)), since\nthe cores get busier with packet processing. While the sharding\napproaches (RSS and RSS++) effectively use the CPU with a high average\nIPC for 2 cores, their average IPC values drop significantly with\nadditional cores, with large variation across cores (see error bars).\nThis indicates a severe imbalance of useful CPU work across cores:\nFlow-affinity-based sharding approaches are unable to balance packet\nprocessing load effectively across cores, leaving some cores idle and\nothers heavily used. In contrast, SCR has a consistently high IPC\nwith more cores and higher offered loads. SCR has higher\npacket-processing latency ((g)\u2013(i)) than RSS-sharding since it needs\nto process the history for each packet. However, its more effective usage of the CPU cores\nresults in better throughput (Figure 6 ###reference_### (g)).\n###figure_16### ###figure_17### ###figure_18### Limits to SCR scaling.\nSCR suffers from two kinds of scaling limitations.\nFirst, as discussed in \u00a73.1 ###reference_###, as the compute latency\nincreases in comparison to the dispatch latency, the effectiveness of\nSCR\u2019s multi-core scaling reduces.\n\nWe evaluate how the throughput of a stateless program varies as the\ncompute latency of this program increases, shown both in\npackets/second (Figure 9a ###reference_sf1###,\nFigure 9b ###reference_sf2###) and\nnormalized to the single-core throughput for that compute latency\n(Figure 9c ###reference_sf3###). With a small\ncompute latency (left of the graph), using cores provides\n throughput relative to a single core, but this\nrelative benefit diminishes with increasing compute latency.\nSecond, SCR\u2019s attachment of histories to packets incurs\nnon-negligible byte/second overheads.\nAdding to the number of bytes per packet increases L3 cache pressure\ndue to higher DDIO cache occupancy [22 ###reference_b22###] and incurs\nadditional PCIe transactions and\nbandwidth [56 ###reference_b56###]. Further, when packet histories\nare appended outside the NIC (e.g. top-of-the-rack switch), SCR may\nsaturate the NIC earlier than other approaches. 
We compare SCR against the shared and sharded approaches, when SCR alone adds history metadata before packets are fed into the NIC while\nthe packets for the latter approaches are truncated to 64 bytes.\nFigure 10 ###reference_### (a) shows the throughput\nof the token bucket program with the university data center\ntrace.\n After 11 cores, the CPU is no longer the bottleneck for SCR. This prevents\nSCR from scaling to a higher packets/second throughput. Yet, SCR saturates at a throughput much higher than the other techniques.\n###figure_19### Overheads of SCR\u2019s loss recovery handling. We\nevaluate how SCR\u2019s loss recovery algorithm impacts throughput with\nand without packet loss. Figure 10 ###reference_### (b)\ncompares a version of SCR without incorporating the loss recovery\nalgorithm, against a version that incorporates loss recovery at\ndifferent artificially-injected random packet losses (0%, 0.01%,\n0.1% and 1%). We also show the performance of existing scaling\ntechniques (shared state, RSS, RSS++). The mere inclusion of the\nloss recovery algorithm impacts performance due to the additional\nlogging operations. Moreover, SCR\u2019s throughput degrades with\nhigher loss rate due to recovery-related synchronization\n(\u00a73.4 ###reference_###). However, SCR still outperforms\nand outscales existing multi-core scaling techniques." | |
| }, | |
| { | |
| "section_id": "4.3", | |
| "parent_section_id": "4", | |
| "section_name": "Practicality of Sequencer Hardware", | |
| "text": "We integrated our Verilog module implementing the sequencer\n(\u00a73.3.2 ###reference_.SSS2###) into the\nNetFPGA-PLUS [12 ###reference_b12###] reference switch, which is clocked at\n250 MHz with a data bus width of 1024 bits. We use the Alveo U250\nboard, which contains 1728000 lookup tables (LUTs) and 3456000\nflip-flops.\nWe synthesized our sequencer design with different numbers of memory\nrows (\u00a73.3.2 ###reference_.SSS2###), corresponding to the size\nof the packet history (in number of packets).\nEach row is 112 bits long, enough to maintain a TCP 4-tuple and an\nadditional 16-bit value (e.g. a counter, timestamp, etc.) for each\nhistoric packet. Table 2 ###reference_### shows the resource usage. Our design\nmeets timing at 250 MHz, implying an achievable bandwidth of more than\n200 Gbit/s. If each packet history metadata in the program is smaller\nthan a row (112 bits), our design can meet timing\nwhile scaling to 128 cores. The LUT and flip-flop hardware usage is\nnegligible compared to the FPGA capacity. We believe that our sequencer design may be\nsimple and cheap enough to be added as an on-chip accelerator\nto a future NIC.\nWe have also implemented a stateful-register-based design of\nthe sequencer on the Tofino programmable switch\n(\u00a73.3.2 ###reference_.SSS2###). The results are shown in\nTable 3 ###reference_###. Our implementation was designed to\nuse as many stateful registers and ALUs as possible (our design uses\n93% on average across stages) to hold the largest number of bits of\npacket history. Our design holds 44 32-bit fields, sufficient to\nparallelize (Table 1 ###reference_###) the DDoS mitigator\nover 44 cores, the port-knocking firewall over 22 cores, the heavy\nhitter and token bucket over 9 cores, or the connection tracker over\n5 cores. The small number of stateful ALUs on the platform, as well\nas the limit on the number of bits that can be read out from stateful\nmemory into packet fields, restrict the Tofino sequencer from scaling\nto a larger number of CPU cores." | |
| }, | |
| { | |
| "section_id": "5", | |
| "parent_section_id": null, | |
| "section_name": "Related Work", | |
| "text": "Frameworks for network function performance. The problem of\nscaling out packet processing is prominent in network function\nvirtualization (NFV), with frameworks such as\nsplit/merge [64 ###reference_b64###], openNF [44 ###reference_b44###],\nand Metron [51 ###reference_b51###] enabling elastic scaling. There have\nalso been efforts to parallelize network functions\nautomatically [59 ###reference_b59###] and designing\ndata structures to minimize cross-core\ncontention [45 ###reference_b45###]. These efforts are flow-oriented,\nmanaging and distributing state at flow granularity. In\ncontrast, SCR scales packet processing for a single\nflow.\nGeneral techniques for software parallelism. Among the\ncanonical frameworks to implement software\nparallelism [54 ###reference_b54###], our scaling principles\nare most reminiscent of Single Program Multiple Data (SPMD)\nparallelism, with the program being identical on each core but the\ndata being distinct. The sequencer in SCR makes the data\ndistinct for each core.\nParallelizing finite state machines. A natural model of\nstateful packet processing programs is as finite state automata (the\nstate space is the set of flow states) making transitions on events\n(packets). There have been significant efforts taken to parallelize\nFSM execution using\nspeculation [62 ###reference_b62###, 61 ###reference_b61###] and data\nparallelism [55 ###reference_b55###]. In contrast, SCR exploits replication.\nParallel network software stacks. There has been recent\ninterest in abstractions and implementations that take advantage of\nparallelism in network stacks, for TCP [68 ###reference_b68###, 52 ###reference_b52###] and for end-to-end data transfers to/from user\nspace [38 ###reference_b38###]. SCR takes a complementary\napproach, using replication rather than decomposing the program into\nsmaller parallelizable computations." | |
| }, | |
| { | |
| "section_id": "6", | |
| "parent_section_id": null, | |
| "section_name": "Conclusion", | |
| "text": "It is now more crucial than ever to investigate techniques to scale\npacket processing using multiple cores. This paper presented\nstate-compute replication (SCR), a principle that enables scaling the\nthroughput of stateful packet-processing programs monotonically across\ncores by leveraging a packet history sequencer, even under realistic\nskewed packet traffic." | |
| } | |
| ], | |
| "appendix": [ | |
| { | |
| "section_id": "Appendix 1", | |
| "parent_section_id": null, | |
| "section_name": "Appendix A Throughput Model", | |
| "text": "This section provides the detailed description of the\nmodel to predict the throughput of SCR (outlined\nin \u00a73.2 ###reference_###) and evaluates whether the actual\nthroughput matches the model.\nSuppose a system has cores, and each core can dispatch a single packet\nin cycles, run a packet-processing program that computes over\na single packet in cycles, where \nis the time for processing the current packet and is the time for\nstate transition using one metadata. is smaller than , as\nstate-transition code is a code snippet extracted from the program which\nprocesses the current packet. For each piggybacked packet, the\ntotal processing time is , where .\nWhen dominates state-computation time (i.e. ), with \ncores, the total rate at which externally-arriving packets can be\nprocessed is . Table 4 ###reference_### lists the parameters we measured\nfor packet-processing applications we evaluated.\nIt shows that is 4.3 \u2013 9.7 times .\nHence, it is possible to scale the packet-processing rate linearly\nwith the number of cores .\nWe applied the parameters in Table 4 ###reference_###\nto the throughput model and compared the predicted throughput to the\nthe actual throughput. Figure 11 ###reference_### shows they match well.\n###figure_20### ###figure_21### ###figure_22### ###figure_23### ###figure_24###" | |
| }, | |
| { | |
| "section_id": "Appendix 2", | |
| "parent_section_id": null, | |
| "section_name": "Appendix B Loss Recovery Algorithm", | |
| "text": "This section provides the detailed pseudocode and a proof\nof correctness of the packet loss recovery\nalgorithm outlined in \u00a73.4 ###reference_###.\nBefore we start the correctness proof of loss recovery algorithm,\nwe define the following notations.\n: an SCR packet. : the SCR packet sent\nfrom the sequencer to a core (please refer to \u00a73.3.1 ###reference_.SSS1###\nfor more details).\n: a regular packet. : the regular packet\nreceived by the sequencer (in original or as part of the packet\nhistory, please refer to \u00a73.3.1 ###reference_.SSS1### for more details).\n: the collection of all cores.\nWe want to prove every core will not be deadlocked by loss recovery,\ni.e. every core will start processing (execute line 6)\nand finish processing (finish executing line 6-12) every regular packet,\nunder the conditions (1) each core will receive at least one SCR packet\nafter packet loss, (2) we have infinite memory, and (3) packet number\nmonotonically increases.\nFor any regular packet , if every core has received (),\nevery core will finish processing .\nWe will firstly prove any core will process to in order.\nThe order of packets to process follows the order at line 5\n(from to ), i.e., after a core finishing\nprocessing , it will start processing .\nEach core will be triggered to process all packets including to ,\nsince each core has received (),\nGiven the order of packets to process, we now prove all cores can finish\nprocessing .\nIf , all cores will start processing (line 6) and then finish\nprocessing (Lemma 1 ###reference_ma1###).\nIf , according to the order of a single core processing\npackets and Lemma 1 ###reference_ma1###, we get the induction hypothesis\nthat if all cores have started processing , all cores will\nfinish processing and start processing .\nUsing induction hypothesis for times, all cores will\nstart and then finish processing .\n\u220e\nFor any regular packet , if all cores have started processing ,\nthen all cores will finish processing .\nFor any core , no matter is lost or received,\n will finish processing it.\nIf is received by , after updates \nin its log (line 10-11), finishes processing .\nIf is lost at (detected at line 6), will wait for other cores to\nupdate in their logs until gets or confirm is\nlost at all of other cores (line 19-33). will not be deadlocked in waiting,\nsince will be updated to or in the logs of\nall cores who have started processing (if line 6 is executed for ,\nline 7 or line 11 will be executed).\n\u220e\nNote that logs are finite and sequence\nnumbers wrap around in real system, but these can be handled with a\nsufficiently large log and sequence space, and we use values 1,024 and\n842,185 in our current implementation." | |
| }, | |
| { | |
| "section_id": "Appendix 3", | |
| "parent_section_id": null, | |
| "section_name": "Appendix C SCR-Aware Multi-Core Programming", | |
| "text": "###figure_25### Consider a packet-processing program developed assuming\nsingle-threaded execution on a single CPU core. The question we tackle\nin this subsection is: how should the program be changed to take\nadvantage of multi-core scaling with state-compute replication? We\nwalk through the process of adapting a program written in the eBPF/XDP\nframework [47 ###reference_b47###], but we believe it is conceptually\nsimilar to adapt programs written in other frameworks such as DPDK.\nWe describe the program transformations necessary for SCR through a running\nexample. Suppose we have a port-knocking\nfirewall [28 ###reference_b28###] with the state machine shown in\nFigure 12 ###reference_###. The program runs a copy of this\nstate machine per source IP address. If a source transmits IPv4/TCP\npackets with the correct sequence of TCP destination ports, then all\nfurther communication is permitted from that source. All other packets\nare dropped. Any transition not shown in the figure leads to the\ndefault CLOSED_1 state, and only the OPEN state permits\npackets to traverse the firewall successfully. A simplified XDP\nimplementation of this single-threaded firewall is shown below.\nThe program\u2019s state is a key-value dictionary mapping source IP\naddresses to an automaton state described in\nFigure 12 ###reference_###. The function get_new_state\nimplements the state transitions. The main function, simple_port_knocking first parses the input packet, dropping\nall packets other than IPv4/TCP packets. Then the program fetches the\nrecorded state corresponding to the source IP on the packet, and\nperforms the state transition corresponding to the TCP destination\nport. If the final state is OPEN, all subsequent packets of that\nsource IP may traverse the firewall to the other side. All other\npackets are dropped.\nTo enable this program to use state-compute replication across cores,\nthis program should be transformed in the following ways. We believe\nthat these transformations may be automated by developing suitable\ncompiler passes, but we have not yet developed such a compiler.\n(1) Define per-core state data structures and\nper-packet metadata structures. First, the program\u2019s state must be\nreplicated across cores. To achieve this, we must define per-core\nstate data structures that are identical to the global state data\nstructures, except that they are not shared among CPU\ncores. Packet-processing frameworks provide APIs to define such\nper-core data structures [16 ###reference_b16###].\nAdditionally, we must define a per-packet metadata structure that\nincludes any part of the packet that is used by the program\u2014through\neither control or data flow\u2014to update the state corresponding to\nthat packet. For the port-knocking firewall, the per-packet metadata\nshould include the l3proto, l4proto, srcip, and dport.\nThe data structures that maintain packet history on the sequencer\ncorrespond to this per-packet metadata (\u00a73.3 ###reference_###).\n(2) Fast-forward the state machine using the packet\nhistory. The SCR-aware program must prepend a loop to \u201ccatch up\u201d\nthe state machine for each packet missed by the CPU core where the\ncurrent packet is being processed. By leveraging the recent history\npiggybacked on each packet, at the end of this loop, the CPU core has\nthe most up-to-the-packet state.\nA few salient points about the code fragment above. 
First, the\nsemantics of the ring buffer of packet history\n(\u00a73.3 ###reference_###) are implemented by looping over the packet\nhistory metadata starting at offset index rather than at offset\n0. The decision to implement the ring buffer semantics in software\nmakes the hardware significantly easier to design, since only a small\npart of the hardware data structure needs to be updated for each\npacket (\u00a73.3.2 ###reference_.SSS2###). Second, the loop must\nimplement the appropriate control flow before the state update to ensure\nthat only packets that should indeed update the flow state do so. Note\nthat the metadata includes parts of the packet that are not only the\ndata dependencies for the state transition (srcip, dport) but\nalso the control dependencies (l3proto, l4proto). Third, no\npacket verdicts are given out for packets in the history: we want the\nprogram to return a judgment for the \u201ccurrent\u201d packet, not for the\nhistoric packets used merely to fast-forward the state\nmachines. Finally, the code fragment adjusts pkt_start to the position in the packet buffer\n(Figure 4a ###reference_sf1###) corresponding to where the \u201coriginal\u201d packet\nbegins. The rest of the original program\u2014unmodified\u2014may process\nthis packet to completion and assign a verdict.\nWhat is excluded from our code transformations is also crucial: the\nprogram requires no locking or explicit synchronization, even though it\nruns on many cores and maintains state that is logically shared\nacross all packets.\nWith these transformations, in principle, a packet-processing program\nis able to scale its performance using state-compute replication\nacross multiple cores." | |
| } | |
| ], | |
| "tables": { | |
| "1": { | |
| "table_html": "<figure class=\"ltx_table ltx_align_center\" id=\"S4.T1\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T1.1\" style=\"width:505.9pt;height:88.8pt;vertical-align:-0.7pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-108.8pt,19.0pt) scale(0.699198128273536,0.699198128273536) ;\">\n<table class=\"ltx_tabular ltx_align_middle\" id=\"S4.T1.1.1\">\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.1.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.1.1\">Program</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T1.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.2.1\">State</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.3.1\">Metadata size</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.4.1\">RSS hash</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.5.1\">Packet traces</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.6.1\">Atomic HW</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.1.1.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.1.1.7.1\">Lines of code</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.2.2\">\n<td class=\"ltx_td ltx_border_l ltx_border_r\" id=\"S4.T1.1.1.2.2.1\"></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.2.1\">Key</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.3.1\">Value</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.4.1\">(bytes/packet)</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.5.1\">fields</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.6.1\">evaluated</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.7\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.7.1\">vs. 
Locks</span></td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.2.2.8\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T1.1.1.2.2.8.1\">(shard/RSS)</span></td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.3.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.3.3.1\">DDoS mitigator</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.3.3.2\">source IP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.3.3.3\">count</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.3.3.4\">4</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.3.3.5\">src & dst IP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.3.3.6\">CAIDA, Univ DC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.3.3.7\">Atomic HW</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T1.1.1.3.3.8\">168</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.4.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T1.1.1.4.4.1\">Heavy hitter monitor</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.2\">5-tuple</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.3\">flow size</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.4\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.5\">5-tuple</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.6\">CAIDA, Univ DC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.7\">Atomic HW</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.4.4.8\">141</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.5.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T1.1.1.5.5.1\">TCP connection state tracking</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.5.5.2\">5-tuple</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.5.5.3\">TCP state, timestamp, seq #</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.5.5.4\">30</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.5.5.5\">5-tuple</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.5.5.6\">Hyperscalar DC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.5.5.7\">Locks</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.5.5.8\">1029</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.6.6\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T1.1.1.6.6.1\">Token bucket policer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.6.6.2\">5-tuple</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.6.6.3\">last packet timestamp, # tokens</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.6.6.4\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.6.6.5\">5-tuple</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.6.6.6\">CAIDA, UnivDC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.6.6.7\">Locks</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T1.1.1.6.6.8\">169</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T1.1.1.7.7\">\n<td class=\"ltx_td ltx_align_left ltx_border_b 
ltx_border_l ltx_border_r\" id=\"S4.T1.1.1.7.7.1\">Port-knocking firewall</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.1.1.7.7.2\">source IP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.1.1.7.7.3\">knocking state (<em class=\"ltx_emph ltx_font_italic\" id=\"S4.T1.1.1.7.7.3.1\">e.g.</em>\u00a0<span class=\"ltx_text ltx_font_typewriter\" id=\"S4.T1.1.1.7.7.3.2\" style=\"font-size:90%;\">OPEN</span>)</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.1.1.7.7.4\">8</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.1.1.7.7.5\">src & dst IP</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.1.1.7.7.6\">CAIDA, UnivDC</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.1.1.7.7.7\">Locks</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T1.1.1.7.7.8\">123</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 1: </span>The packet-processing programs we evaluated.</figcaption>\n</figure>", | |
| "capture": "Table 1: The packet-processing programs we evaluated." | |
| }, | |
| "2": { | |
| "table_html": "<figure class=\"ltx_table ltx_align_center\" id=\"S4.T2\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T2.1\" style=\"width:192.2pt;height:93.2pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-15.2pt,7.4pt) scale(0.863139819552049,0.863139819552049) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T2.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1.1.1\">Rows</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" colspan=\"3\" id=\"S4.T2.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1.2.1\">LUT</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" colspan=\"2\" id=\"S4.T2.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.1.1.3.1\">Flip-flops</span></th>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.2.2\">\n<th class=\"ltx_td ltx_th ltx_th_column ltx_border_l ltx_border_r\" id=\"S4.T2.1.1.2.2.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T2.1.1.2.2.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.2.1\">Usage</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T2.1.1.2.2.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.3.1\">Logic</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T2.1.1.2.2.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.4.1\">%</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T2.1.1.2.2.5\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.5.1\">Usage</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r\" id=\"S4.T2.1.1.2.2.6\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T2.1.1.2.2.6.1\">%</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.3.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.1.1\">16</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.1.2\">1045</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.1.3\">646</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.1.4\">0.060</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.1.5\">2369</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T2.1.1.3.1.6\">0.069</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.4.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T2.1.1.4.2.1\">32</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.4.2.2\">1852</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.4.2.3\">1444</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.4.2.4\">0.107</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.4.2.5\">3158</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.4.2.6\">0.091</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.5.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" 
id=\"S4.T2.1.1.5.3.1\">64</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.5.3.2\">2637</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.5.3.3\">2229</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.5.3.4\">0.153</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.5.3.5\">4707</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T2.1.1.5.3.6\">0.136</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T2.1.1.6.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T2.1.1.6.4.1\">128</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.1.6.4.2\">3390</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.1.6.4.3\">2982</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.1.6.4.4\">0.196</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.1.6.4.5\">7786</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T2.1.1.6.4.6\">0.226</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 2: </span>Sequencer resource usage after synthesis into the\nNetFPGA-PLUS reference switch and meeting timing at 250 MHz.</figcaption>\n</figure>", | |
| "capture": "Table 2: Sequencer resource usage after synthesis into the\nNetFPGA-PLUS reference switch and meeting timing at 250 MHz." | |
| }, | |
| "3": { | |
| "table_html": "<figure class=\"ltx_table ltx_align_center\" id=\"S4.T3\">\n<div class=\"ltx_inline-block ltx_transformed_outer\" id=\"S4.T3.1\" style=\"width:202.4pt;height:69.3pt;vertical-align:-0.0pt;\"><span class=\"ltx_transformed_inner\" style=\"transform:translate(-30.2pt,10.3pt) scale(0.770377812998155,0.770377812998155) ;\">\n<table class=\"ltx_tabular ltx_guessed_headers ltx_align_middle\" id=\"S4.T3.1.1\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.1.1\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.1.1\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.1.1\">Resource</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.1.2\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.2.1\">Avg%</span></th>\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.1.3\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.3.1\">Resource</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.1.1.4\"><span class=\"ltx_text ltx_font_bold\" id=\"S4.T3.1.1.1.1.4.1\">Avg%</span></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.2.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.2.1.1\">Exact match crossbars</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.2.1.2\">23.31%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.2.1.3\">SRAM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"S4.T3.1.1.2.1.4\">9.69%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.3.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T3.1.1.3.2.1\">VLIW instructions</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.1.3.2.2\">9.11%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.1.1.3.2.3\">TCAM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.1.3.2.4\">0.00%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.4.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"S4.T3.1.1.4.3.1\">Stateful ALUs</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.1.4.3.2\">93.75%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_r\" id=\"S4.T3.1.1.4.3.3\">Map RAM</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"S4.T3.1.1.4.3.4\">15.62%</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"S4.T3.1.1.5.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"S4.T3.1.1.5.4.1\">Logical tables</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.1.1.5.4.2\">23.96%</td>\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_r\" id=\"S4.T3.1.1.5.4.3\">Gateway</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"S4.T3.1.1.5.4.4\">23.44%</td>\n</tr>\n</tbody>\n</table>\n</span></div>\n<figcaption class=\"ltx_caption\"><span class=\"ltx_tag ltx_tag_table\">Table 3: </span>Resource usage (average % across stages) of a Tofino\nimplementation of the sequencer that uses as many stateful ALUs\nas possible to store packet history, amounting to 44 32-bit\nfields.</figcaption>\n</figure>", | |
| "capture": "Table 3: Resource usage (average % across stages) of a Tofino\nimplementation of the sequencer that uses as many stateful ALUs\nas possible to store packet history, amounting to 44 32-bit\nfields." | |
| }, | |
| "4": { | |
| "table_html": "<figure class=\"ltx_table\" id=\"A1.T4\">\n<table class=\"ltx_tabular ltx_centering ltx_guessed_headers ltx_align_middle\" id=\"A1.T4.4\">\n<thead class=\"ltx_thead\">\n<tr class=\"ltx_tr\" id=\"A1.T4.4.4\">\n<th class=\"ltx_td ltx_align_left ltx_th ltx_th_column ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T4.4.4.5\"><span class=\"ltx_text ltx_font_bold\" id=\"A1.T4.4.4.5.1\">Application</span></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"A1.T4.1.1.1\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"A1.T4.2.2.2\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"A1.T4.3.3.3\"></th>\n<th class=\"ltx_td ltx_align_center ltx_th ltx_th_column ltx_border_r ltx_border_t\" id=\"A1.T4.4.4.4\"></th>\n</tr>\n</thead>\n<tbody class=\"ltx_tbody\">\n<tr class=\"ltx_tr\" id=\"A1.T4.4.5.1\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r ltx_border_t\" id=\"A1.T4.4.5.1.1\">DDoS mitigator</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T4.4.5.1.2\">114</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T4.4.5.1.3\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T4.4.5.1.4\">104</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r ltx_border_t\" id=\"A1.T4.4.5.1.5\">10</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.4.6.2\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"A1.T4.4.6.2.1\">Heavy hitter monitor</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.6.2.2\">145</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.6.2.3\">15</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.6.2.4\">110</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.6.2.5\">35</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.4.7.3\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"A1.T4.4.7.3.1\">Token bucket policer</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.7.3.2\">156</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.7.3.3\">21</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.7.3.4\">104</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.7.3.5\">53</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.4.8.4\">\n<td class=\"ltx_td ltx_align_left ltx_border_l ltx_border_r\" id=\"A1.T4.4.8.4.1\">Port-knocking firewall</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.8.4.2\">107</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.8.4.3\">18</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.8.4.4\">97</td>\n<td class=\"ltx_td ltx_align_center ltx_border_r\" id=\"A1.T4.4.8.4.5\">11</td>\n</tr>\n<tr class=\"ltx_tr\" id=\"A1.T4.4.9.5\">\n<td class=\"ltx_td ltx_align_left ltx_border_b ltx_border_l ltx_border_r\" id=\"A1.T4.4.9.5.1\">TCP connection tracking</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"A1.T4.4.9.5.2\">152</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"A1.T4.4.9.5.3\">35</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"A1.T4.4.9.5.4\">80</td>\n<td class=\"ltx_td ltx_align_center ltx_border_b ltx_border_r\" id=\"A1.T4.4.9.5.5\">73</td>\n</tr>\n</tbody>\n</table>\n<figcaption class=\"ltx_caption ltx_centering\"><span 
class=\"ltx_tag ltx_tag_table\">Table 4: </span>The throughput model parameters (in nanoseconds) for packet-processing applications we evaluated.</figcaption>\n</figure>", | |
| "capture": "Table 4: The throughput model parameters (in nanoseconds) for packet-processing applications we evaluated." | |
| } | |
| }, | |
| "image_paths": { | |
| "1": { | |
| "figure_path": "2309.14647v2_figure_1.png", | |
| "caption": "Figure 1: Scaling the throughput of a TCP connection state tracker\nfor a single TCP connection across multiple cores. Sharing\nstate across cores degrades performance beyond 2 cores due to\ncontention. Sharding state (using RSS and\nRSS++ [34]) cannot improve throughput beyond a\nsingle CPU core (\u00a72). In contrast,\nState-Compute Replication (\u00a73) provides linear scale-up\nin throughput with cores.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x1.png" | |
| }, | |
| "2(a)": { | |
| "figure_path": "2309.14647v2_figure_2(a).png", | |
| "caption": "(a) Packets/second\nFigure 2: The nature of CPU work in high-speed packet processing:\nConsider the throughput of a simple packet forwarding\napplication (packets/second (a), bits/second (b)) running on a\nsingle CPU core clocked at 3.6 GHz, as the size of the incoming\npackets varies. The average latency to execute the XDP program\nis also shown in nanoseconds (c). CPU usage is tied to the\nnumber of packets (not bits) processed per second. Further,\nsignificant time elapses in dispatch:\nCPU work to present the input packet to and retrieve the\noutput packet from the program computation.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x2.png" | |
| }, | |
| "2(b)": { | |
| "figure_path": "2309.14647v2_figure_2(b).png", | |
| "caption": "(b) Bits/second\nFigure 2: The nature of CPU work in high-speed packet processing:\nConsider the throughput of a simple packet forwarding\napplication (packets/second (a), bits/second (b)) running on a\nsingle CPU core clocked at 3.6 GHz, as the size of the incoming\npackets varies. The average latency to execute the XDP program\nis also shown in nanoseconds (c). CPU usage is tied to the\nnumber of packets (not bits) processed per second. Further,\nsignificant time elapses in dispatch:\nCPU work to present the input packet to and retrieve the\noutput packet from the program computation.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x3.png" | |
| }, | |
| "2(c)": { | |
| "figure_path": "2309.14647v2_figure_2(c).png", | |
| "caption": "(c) Latency (ns)\nFigure 2: The nature of CPU work in high-speed packet processing:\nConsider the throughput of a simple packet forwarding\napplication (packets/second (a), bits/second (b)) running on a\nsingle CPU core clocked at 3.6 GHz, as the size of the incoming\npackets varies. The average latency to execute the XDP program\nis also shown in nanoseconds (c). CPU usage is tied to the\nnumber of packets (not bits) processed per second. Further,\nsignificant time elapses in dispatch:\nCPU work to present the input packet to and retrieve the\noutput packet from the program computation.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x4.png" | |
| }, | |
| "3(a)": { | |
| "figure_path": "2309.14647v2_figure_3(a).png", | |
| "caption": "(a) The sequencer stores relevant fields from the packet\nhistory, and piggybacks the history on packets sprayed\nround-robin across cores.\nFigure 3: An example illustrating the scaling principles. pisubscript\ud835\udc5d\ud835\udc56p_{i}italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is\nthe it\u2062hsuperscript\ud835\udc56\ud835\udc61\u210ei^{th}italic_i start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT packet received by the sequencer, f\u2062(pj)\ud835\udc53subscript\ud835\udc5d\ud835\udc57f(p_{j})italic_f ( italic_p start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) are\nrelevant fields from pjsubscript\ud835\udc5d\ud835\udc57p_{j}italic_p start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT, and Sisubscript\ud835\udc46\ud835\udc56S_{i}italic_S start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is the state after\nprocessing packets p1,\u2026,pisubscript\ud835\udc5d1\u2026subscript\ud835\udc5d\ud835\udc56p_{1},...,p_{i}italic_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , \u2026 , italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT in order.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x5.png" | |
| }, | |
| "3(b)": { | |
| "figure_path": "2309.14647v2_figure_3(b).png", | |
| "caption": "(b) Each core fast-forwards its private state and then handles\nits packet.\nFigure 3: An example illustrating the scaling principles. pisubscript\ud835\udc5d\ud835\udc56p_{i}italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is\nthe it\u2062hsuperscript\ud835\udc56\ud835\udc61\u210ei^{th}italic_i start_POSTSUPERSCRIPT italic_t italic_h end_POSTSUPERSCRIPT packet received by the sequencer, f\u2062(pj)\ud835\udc53subscript\ud835\udc5d\ud835\udc57f(p_{j})italic_f ( italic_p start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) are\nrelevant fields from pjsubscript\ud835\udc5d\ud835\udc57p_{j}italic_p start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT, and Sisubscript\ud835\udc46\ud835\udc56S_{i}italic_S start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT is the state after\nprocessing packets p1,\u2026,pisubscript\ud835\udc5d1\u2026subscript\ud835\udc5d\ud835\udc56p_{1},...,p_{i}italic_p start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT , \u2026 , italic_p start_POSTSUBSCRIPT italic_i end_POSTSUBSCRIPT in order.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x6.png" | |
| }, | |
| "4(a)": { | |
| "figure_path": "2309.14647v2_figure_4(a).png", | |
| "caption": "(a) Packet format\nFigure 4: Hardware data structures.\n(a) Packets modified to propagate history from the sequencer to\nCPU cores. The sequencer prefixes the packet history to the\noriginal packet, which allows for a simpler implementation in\nhardware (\u00a73.3) and simpler transformations to\nmake a packet-processing program SCR-aware\n(App.C). In instantiations where the sequencer is\npartly implemented on a top-of-the-rack switch\n(\u00a73.2), we further prefix a dummy Ethernet\nheader to ensure that the NIC can process the packet correctly.\n(b) The data structure used to maintain and propagate packet\nhistory on the Tofino programmable switch pipeline\n(\u00a73.3.2). Inset shows the specific\nactions performed on each Tofino register.\n(c) The data structure used to maintain and propagate packet\nhistory on our Verilog module integrated into NetFPGA-PLUS\n(\u00a73.3.2).", | |
| "url": "http://arxiv.org/html/2309.14647v2/x7.png" | |
| }, | |
| "4(b)": { | |
| "figure_path": "2309.14647v2_figure_4(b).png", | |
| "caption": "(b) Tofino sequencer\nFigure 4: Hardware data structures.\n(a) Packets modified to propagate history from the sequencer to\nCPU cores. The sequencer prefixes the packet history to the\noriginal packet, which allows for a simpler implementation in\nhardware (\u00a73.3) and simpler transformations to\nmake a packet-processing program SCR-aware\n(App.C). In instantiations where the sequencer is\npartly implemented on a top-of-the-rack switch\n(\u00a73.2), we further prefix a dummy Ethernet\nheader to ensure that the NIC can process the packet correctly.\n(b) The data structure used to maintain and propagate packet\nhistory on the Tofino programmable switch pipeline\n(\u00a73.3.2). Inset shows the specific\nactions performed on each Tofino register.\n(c) The data structure used to maintain and propagate packet\nhistory on our Verilog module integrated into NetFPGA-PLUS\n(\u00a73.3.2).", | |
| "url": "http://arxiv.org/html/2309.14647v2/x8.png" | |
| }, | |
| "4(c)": { | |
| "figure_path": "2309.14647v2_figure_4(c).png", | |
| "caption": "(c) RTL sequencer\nFigure 4: Hardware data structures.\n(a) Packets modified to propagate history from the sequencer to\nCPU cores. The sequencer prefixes the packet history to the\noriginal packet, which allows for a simpler implementation in\nhardware (\u00a73.3) and simpler transformations to\nmake a packet-processing program SCR-aware\n(App.C). In instantiations where the sequencer is\npartly implemented on a top-of-the-rack switch\n(\u00a73.2), we further prefix a dummy Ethernet\nheader to ensure that the NIC can process the packet correctly.\n(b) The data structure used to maintain and propagate packet\nhistory on the Tofino programmable switch pipeline\n(\u00a73.3.2). Inset shows the specific\nactions performed on each Tofino register.\n(c) The data structure used to maintain and propagate packet\nhistory on our Verilog module integrated into NetFPGA-PLUS\n(\u00a73.3.2).", | |
| "url": "http://arxiv.org/html/2309.14647v2/x9.png" | |
| }, | |
| "5(a)": { | |
| "figure_path": "2309.14647v2_figure_5(a).png", | |
| "caption": "(a) University DC\nFigure 5: Flow size distributions of the packet traces we used. We\nused real packet traces captured at (a) university data\ncenter [35] and (b) wide-area\nInternet backbone by CAIDA [11]. We also synthesized (c)\na packet trace with real TCP flows whose sizes are drawn from\nMicrosoft\u2019s data center flow size\ndistribution [32].", | |
| "url": "http://arxiv.org/html/2309.14647v2/x10.png" | |
| }, | |
| "5(b)": { | |
| "figure_path": "2309.14647v2_figure_5(b).png", | |
| "caption": "(b) Internet backbone\nFigure 5: Flow size distributions of the packet traces we used. We\nused real packet traces captured at (a) university data\ncenter [35] and (b) wide-area\nInternet backbone by CAIDA [11]. We also synthesized (c)\na packet trace with real TCP flows whose sizes are drawn from\nMicrosoft\u2019s data center flow size\ndistribution [32].", | |
| "url": "http://arxiv.org/html/2309.14647v2/x11.png" | |
| }, | |
| "5(c)": { | |
| "figure_path": "2309.14647v2_figure_5(c).png", | |
| "caption": "(c) Hyperscalar DC\nFigure 5: Flow size distributions of the packet traces we used. We\nused real packet traces captured at (a) university data\ncenter [35] and (b) wide-area\nInternet backbone by CAIDA [11]. We also synthesized (c)\na packet trace with real TCP flows whose sizes are drawn from\nMicrosoft\u2019s data center flow size\ndistribution [32].", | |
| "url": "http://arxiv.org/html/2309.14647v2/x12.png" | |
| }, | |
| "6": { | |
| "figure_path": "2309.14647v2_figure_6.png", | |
| "caption": "Figure 6: Throughput (\u00a74.1) in millions of\npackets per second (Mpps) of four stateful packet-processing\nprograms implemented using state-compute replication\n(\u00a73), shared state, and sharding\n(\u00a72).\nPacket traffic is replayed from real data center and Internet backbone\ntraces.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x13.png" | |
| }, | |
| "7": { | |
| "figure_path": "2309.14647v2_figure_7.png", | |
| "caption": "Figure 7: Throughput of TCP connection tracking\nparallelized using four techniques, SCR (\u00a73), shared\nstate, sharding with RSS, and sharding with\nRSS++ [34], on a hyperscalar data center trace\n(\u00a74.1).", | |
| "url": "http://arxiv.org/html/2309.14647v2/x14.png" | |
| }, | |
| "8": { | |
| "figure_path": "2309.14647v2_figure_8.png", | |
| "caption": "Figure 8: Hardware performance metrics drawn from Intel PCM while\nexecuting the token bucket program. As the offered load\nincreases, we show the program\u2019s compute latency (measured purely\nfor the XDP portion), the L2 hit ratio, and the number of\ninstructions retired per CPU clock cycle (IPC), when the program\nis scaled to 2, 4, or 7 cores. Packet traffic is from a\nuniversity data center (\u00a74.1).", | |
| "url": "http://arxiv.org/html/2309.14647v2/x15.png" | |
| }, | |
| "9(a)": { | |
| "figure_path": "2309.14647v2_figure_9(a).png", | |
| "caption": "(a) Packets/second (1 rxq)\nFigure 9: Evaluating the throughput scaling of a stateless program\nusing SCR, as the compute latency of the program varies\nbut the dispatch latency remains constant, (a) in\npackets/second, and (b) normalized against single-core throughput\nat the same compute latency. As discussed in \u00a73.1,\nthe more the dispatch time dominates compute time, the more\neffective the multi-core scaling from SCR.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x16.png" | |
| }, | |
| "9(b)": { | |
| "figure_path": "2309.14647v2_figure_9(b).png", | |
| "caption": "(b) Packets/second (2 rxq)\nFigure 9: Evaluating the throughput scaling of a stateless program\nusing SCR, as the compute latency of the program varies\nbut the dispatch latency remains constant, (a) in\npackets/second, and (b) normalized against single-core throughput\nat the same compute latency. As discussed in \u00a73.1,\nthe more the dispatch time dominates compute time, the more\neffective the multi-core scaling from SCR.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x17.png" | |
| }, | |
| "9(c)": { | |
| "figure_path": "2309.14647v2_figure_9(c).png", | |
| "caption": "(c) Normalized to 1 core\nFigure 9: Evaluating the throughput scaling of a stateless program\nusing SCR, as the compute latency of the program varies\nbut the dispatch latency remains constant, (a) in\npackets/second, and (b) normalized against single-core throughput\nat the same compute latency. As discussed in \u00a73.1,\nthe more the dispatch time dominates compute time, the more\neffective the multi-core scaling from SCR.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x18.png" | |
| }, | |
| "10": { | |
| "figure_path": "2309.14647v2_figure_10.png", | |
| "caption": "Figure 10: (a) The throughput of a token bucket policer on the\nuniversity data center trace (\u00a74.1), while\ntruncating all packets in the trace to 64 bytes, with only SCR adding metadata to packets before feeding them to the NIC. (b)\nThe throughput of a port-knocking firewall on the university data\ncenter trace. SCR is run with and without loss recovery\n(\u00a73.4) at multiple packet loss rates.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x19.png" | |
| }, | |
| "11(a)": { | |
| "figure_path": "2309.14647v2_figure_11(a).png", | |
| "caption": "(a) DDoS mitigation\nFigure 11: Predicted and actual throughput (\u00a74.1)\nin millions of packets per second (Mpps) of five stateful packet-processing\nprograms implemented using SCR (\u00a73). The workloads of\n(a)-(d) and (e) are from a university data center and a hyperscalar\ndata center (\u00a74.1) separately.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x20.png" | |
| }, | |
| "11(b)": { | |
| "figure_path": "2309.14647v2_figure_11(b).png", | |
| "caption": "(b) Heavy hitter detector\nFigure 11: Predicted and actual throughput (\u00a74.1)\nin millions of packets per second (Mpps) of five stateful packet-processing\nprograms implemented using SCR (\u00a73). The workloads of\n(a)-(d) and (e) are from a university data center and a hyperscalar\ndata center (\u00a74.1) separately.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x21.png" | |
| }, | |
| "11(c)": { | |
| "figure_path": "2309.14647v2_figure_11(c).png", | |
| "caption": "(c) Token bucket policer\nFigure 11: Predicted and actual throughput (\u00a74.1)\nin millions of packets per second (Mpps) of five stateful packet-processing\nprograms implemented using SCR (\u00a73). The workloads of\n(a)-(d) and (e) are from a university data center and a hyperscalar\ndata center (\u00a74.1) separately.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x22.png" | |
| }, | |
| "11(d)": { | |
| "figure_path": "2309.14647v2_figure_11(d).png", | |
| "caption": "(d) Port-knocking firewall\nFigure 11: Predicted and actual throughput (\u00a74.1)\nin millions of packets per second (Mpps) of five stateful packet-processing\nprograms implemented using SCR (\u00a73). The workloads of\n(a)-(d) and (e) are from a university data center and a hyperscalar\ndata center (\u00a74.1) separately.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x23.png" | |
| }, | |
| "11(e)": { | |
| "figure_path": "2309.14647v2_figure_11(e).png", | |
| "caption": "(e) TCP connection tracking\nFigure 11: Predicted and actual throughput (\u00a74.1)\nin millions of packets per second (Mpps) of five stateful packet-processing\nprograms implemented using SCR (\u00a73). The workloads of\n(a)-(d) and (e) are from a university data center and a hyperscalar\ndata center (\u00a74.1) separately.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x24.png" | |
| }, | |
| "12": { | |
| "figure_path": "2309.14647v2_figure_12.png", | |
| "caption": "Figure 12: A state machine for a simple port-knocking firewall.", | |
| "url": "http://arxiv.org/html/2309.14647v2/x25.png" | |
| } | |
| }, | |
| "validation": true, | |
| "references": [ | |
| { | |
| "1": { | |
| "title": "[Online, Retrieved Feb 21, 2023.]\nhttps://www.intel.com/content/www/us/en/products/details/network-io/ipu.html.", | |
| "author": "Intel IPU.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "2": { | |
| "title": "[Online, Retrieved Nov 05, 2020.]\nhttps://www.kernel.org/doc/Documentation/networking/filter.txt.", | |
| "author": "Linux Socket Filtering aka Berkeley Packet Filter (BPF).", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "3": { | |
| "title": "[Online, Retrieved Feb 21, 2023.]\nhttps://www.nvidia.com/en-us/networking/products/data-processing-unit.", | |
| "author": "NVIDIA BlueField DPU.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "4": { | |
| "title": "[Online, Retrieved Feb 21, 2023.]\nhttps://www.kernel.org/doc/Documentation/networking/scaling.txt.", | |
| "author": "Receive Side Scaling.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "5": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.rfc-editor.org/rfc/rfc2544, 1999.", | |
| "author": "Benchmarking Methodology for Network Interconnect Devices.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "6": { | |
| "title": "[Online, Retrieved May 02, 2024.]\nhttps://1.ieee802.org/dcb/802-1qbb/, 2011.", | |
| "author": "IEEE 802.1Qbb \u2013 Priority-based Flow Control.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "7": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.intel.com/content/www/us/en/developer/articles/training/setting-up-intel-ethernet-flow-director.html,\n2017.", | |
| "author": "How to set up Intel Ethernet Flow Director.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "8": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://engineering.fb.com/2018/05/22/open-source/open-sourcing-katran-a-scalable-network-load-balancer/,\n2018.", | |
| "author": "Open-sourcing Katran, a scalable network load balancer.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "9": { | |
| "title": "[Online, Retrieved Apr 29, 2024.]\nhttps://github.com/rsspp/linux/commit/4e09cf8be6ac5b0a06cc5b92c62f758f29e3b6aa,\n2019.", | |
| "author": "A kernel patch to support RSS++.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "10": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://lwn.net/Articles/779120/, 2019.", | |
| "author": "Concurrency management in eBPF.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "11": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.caida.org/catalog/datasets/passive_dataset, 2019.", | |
| "author": "The CAIDA UCSD Anonymized Internet Traces - 2019.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "12": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://github.com/NetFPGA/NetFPGA-PLUS, 2021.", | |
| "author": "NetFPGA-PLUS.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "13": { | |
| "title": "[Online, Retrieved Apr 14, 2024.]\nhttps://www.nvidia.com/content/dam/en-zz/Solutions/networking/ethernet-adapters/ConnectX-6-Dx-Datasheet.pdf,\n2021.", | |
| "author": "Nvidia ConnectX-6 DX.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "14": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://developers.redhat.com/articles/2022/06/22/\nmeasuring-bpf-performance-tips-tricks-and-best-practices, 2022.", | |
| "author": "Measuring BPF performance: Tips, tricks, and best practices.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "15": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.man7.org/linux/man-pages/man7/bpf-helpers.7.html, 2023.", | |
| "author": "BPF helpers manual page.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "16": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.kernel.org/doc/html/latest/bpf/maps.html, 2023.", | |
| "author": "BPF maps.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "17": { | |
| "title": "[Online, Retrieved Jul 22, 2023.]\nhttps://www.broadcom.com/products/ethernet-connectivity/switching/strataxgs/bcm56880-series,\n2023.", | |
| "author": "Broadcom Trident 4: BCM56880 Series.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "18": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://gcc.gnu.org/onlinedocs/gcc-13.2.0/gcc/_005f_005fatomic-Builtins.html,\n2023.", | |
| "author": "Built-in Functions for Memory Model Aware Atomic Operations.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "19": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.kernel.org/doc/html/latest/networking/checksum-offloads.html,\n2023.", | |
| "author": "Checksum Offloads.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "20": { | |
| "title": "[Online, Retrieved May 1, 2024.]\nhttps://resources.nvidia.com/en-us-accelerated-networking-resource-library/connectx-7-datasheet,\n2023.", | |
| "author": "ConnectX-7 400G adapters.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "21": { | |
| "title": "[Online, Retrieved Jul 22, 2023.]\nhttps://www.intel.com/content/www/us/en/developer/topic-technology/networking/dpdk.html,\n2023.", | |
| "author": "Data Plane Development Kit.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "22": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.intel.com/content/www/us/en/io/data-direct-i-o-technology.html,\n2023.", | |
| "author": "Intel Data Direct I/O technology.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "23": { | |
| "title": "[Online, Retrieved Jul 22, 2023.]\nhttps://www.intel.com/content/www/us/en/products/network-io/programmable-ethernet-switch/tofino-series.html,\n2023.", | |
| "author": "Intel Tofino.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "24": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.intel.com/content/www/us/en/developer/articles/tool/performance-counter-monitor.html,\n2023.", | |
| "author": "Introduction to Intel Performance Counter Monitor (PCM).", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "25": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://gcc.gnu.org/onlinedocs/gcc-13.2.0/gcc/_005f_005fsync-Builtins.html,\n2023.", | |
| "author": "Legacy __sync Built-in Functions for Atomic Memory Access.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "26": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://github.com/barefootnetworks/Open-Tofino/blob/master/PUBLIC_Tofino-Native-Arch.pdf,\n2023.", | |
| "author": "Open Tofino: P4 Intel Tofino Native Architecture - Public version.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "27": { | |
| "title": "[Online, Retrieved Jul 22, 2023.]\nhttps://www.amd.com/en/accelerators/pensando, 2023.", | |
| "author": "Pensando infrastructure accelerators.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "28": { | |
| "title": "[Online, Retrieved Jul 22, 2023.]\nhttps://help.ubuntu.com/community/PortKnocking, 2023.", | |
| "author": "Port knocking.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "29": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.kernel.org/doc/html/latest/networking/segmentation-offloads.html,\n2023.", | |
| "author": "Segmentation Offloads.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "30": { | |
| "title": "[Online, Retrieved Sep 17, 2023.]\nhttps://www.chelsio.com/nic/tcp-offload-engine/, 2023.", | |
| "author": "TCP Offload Engine: Chelsio Communications.", | |
| "venue": null, | |
| "url": null | |
| } | |
| }, | |
| { | |
| "31": { | |
| "title": "Understanding host interconnect congestion.", | |
| "author": "Saksham Agarwal, Rachit Agarwal, Behnam Montazeri, Masoud Moshref, Khaled\nElmeleegy, Luigi Rizzo, Marc Asher de Kruijf, Gautam Kumar, Sylvia Ratnasamy,\nDavid Culler, and Amin Vahdat.", | |
| "venue": "In Proceedings of the ACM Workshop on Hot Topics in Networks\n(HotNets), HotNets \u201922, 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "32": { | |
| "title": "Data center tcp (dctcp).", | |
| "author": "Mohammad Alizadeh, Albert Greenberg, David A Maltz, Jitendra Padhye, Parveen\nPatel, Balaji Prabhakar, Sudipta Sengupta, and Murari Sridharan.", | |
| "venue": "In Proceedings of the ACM SIGCOMM 2010 Conference, 2010.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "33": { | |
| "title": "Enabling programmable transport protocols in High-Speed NICs.", | |
| "author": "Mina Tahmasbi Arashloo, Alexey Lavrov, Manya Ghobadi, Jennifer Rexford, David\nWalker, and David Wentzlaff.", | |
| "venue": "In 17th USENIX Symposium on Networked Systems Design and\nImplementation (NSDI), 2020.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "34": { | |
| "title": "Rss++: Load and state-aware receive side scaling.", | |
| "author": "Tom Barbette, Georgios P. Katsikas, Gerald Q. Maguire, and Dejan Kosti\u0107.", | |
| "venue": "In Proceedings of the 15th International Conference on Emerging\nNetworking Experiments And Technologies, 2019.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "35": { | |
| "title": "Understanding data center traffic characteristics.", | |
| "author": "Theophilus Benson, Ashok Anand, Aditya Akella, and Ming Zhang.", | |
| "venue": "SIGCOMM Comput. Commun. Rev., 2010.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "36": { | |
| "title": "K8s Service Load Balancing with BPF & XDP.", | |
| "author": "Daniel Borkmann and Martynas Pumputis.", | |
| "venue": "[Online. Retrieved Jan 23, 2021.]\nhttps://linuxplumbersconf.org/event/7/contributions/674/attachments/568/1002/plumbers_2020_cilium_load_balancer.pdf,\n2020.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "37": { | |
| "title": "Forwarding metamorphosis: Fast programmable match-action processing\nin hardware for SDN.", | |
| "author": "Pat Bosshart et al.", | |
| "venue": "SIGCOMM, 2013.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "38": { | |
| "title": "Towards s tail latency and terabit ethernet: Disaggregating the\nhost network stack.", | |
| "author": "Qizhe Cai, Midhul Vuppalapati, Jaehyun Hwang, Christos Kozyrakis, and Rachit\nAgarwal.", | |
| "venue": "In Proceedings of the ACM SIGCOMM 2022 Conference, 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "39": { | |
| "title": "Connection Tracking (conntrack): Design and Implementation Inside\nLinux Kernel.", | |
| "author": "Arthur Chiao.", | |
| "venue": "[Online, Retrieved Sep 17, 2023.]\nhttps://arthurchiao.art/blog/conntrack-design-and-implementation/,\n2020.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "40": { | |
| "title": "Rpcvalet: Ni-driven tail-aware balancing of us-scale rpcs.", | |
| "author": "Alexandros Daglis, Mark Sutherland, and Babak Falsafi.", | |
| "venue": "In Proceedings of the Twenty-Fourth International Conference on\nArchitectural Support for Programming Languages and Operating Systems, 2019.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "41": { | |
| "title": "Maglev: A fast and reliable software network load balancer.", | |
| "author": "Daniel E Eisenbud, Cheng Yi, Carlo Contavalli, Cody Smith, Roman Kononov, Eric\nMann-Hielscher, Ardas Cilingiroglu, Bin Cheyney, Wentao Shang, and\nJinnah Dylan Hosein.", | |
| "venue": "In 13th USENIX Symposium on Networked Systems Design and\nImplementation (NSDI 16), pages 523\u2013535, 2016.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "42": { | |
| "title": "L4drop: Xdp ddos mitigations.", | |
| "author": "Arthur Fabre.", | |
| "venue": "[Online, Retrieved Jul 25, 2023.]\nhttps://blog.cloudflare.com/l4drop-xdp-ebpf-based-ddos-mitigations/.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "43": { | |
| "title": "Azure accelerated networking: Smartnics in the public cloud.", | |
| "author": "D. Firestone, A. Putnam, S. Mundkur, D. Chiou, A. Dabagh, M. Andrewartha,\nH. Angepat, V. Bhanu, A. Caulfield, E. Chung, et al.", | |
| "venue": "In NSDI, 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "44": { | |
| "title": "Opennf: Enabling innovation in network function control.", | |
| "author": "Aaron Gember-Jacobson, Raajay Viswanathan, Chaithan Prakash, Robert Grandl,\nJunaid Khalid, Sourav Das, and Aditya Akella.", | |
| "venue": "SIGCOMM Comput. Commun. Rev., 2014.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "45": { | |
| "title": "High-speed connection tracking in modern servers.", | |
| "author": "Massimo Girondi, Marco Chiesa, and Tom Barbette.", | |
| "venue": "In 2021 IEEE 22nd International Conference on High Performance\nSwitching and Routing (HPSR), 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "46": { | |
| "title": "Rdma over commodity ethernet at scale.", | |
| "author": "Chuanxiong Guo, Haitao Wu, Zhong Deng, Gaurav Soni, Jianxi Ye, Jitu Padhye, and\nMarina Lipshteyn.", | |
| "venue": "In Proceedings of the ACM SIGCOMM Conference, 2016.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "47": { | |
| "title": "The express data path: Fast programmable packet processing in the\noperating system kernel.", | |
| "author": "Toke H\u00f8iland-J\u00f8rgensen, Jesper Dangaard Brouer, Daniel Borkmann, John\nFastabend, Tom Herbert, David Ahern, and David Miller.", | |
| "venue": "In Proceedings of the 14th International Conference on Emerging\nNetworking EXperiments and Technologies, CoNEXT \u201918, page 54\u201366, New\nYork, NY, USA, 2018. Association for Computing Machinery.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "48": { | |
| "title": "The nanopu: A nanosecond network stack for datacenters.", | |
| "author": "Stephen Ibanez, Alex Mallery, Serhat Arslan, Theo Jepsen, Muhammad Shahbaz,\nChanghoon Kim, and Nick McKeown.", | |
| "venue": "In USENIX Symposium on Operating Systems Design and\nImplementation (OSDI), 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "49": { | |
| "title": "Accelerating networking with AF_XDP.", | |
| "author": "Jonathan Corbet.", | |
| "venue": "[Online. Retrieved Jan 20, 2021.]\nhttps://lwn.net/Articles/750845/, 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "50": { | |
| "title": "Debugging transient faults in data centers using synchronized\nnetwork-wide packet histories.", | |
| "author": "Pravein Govindan Kannan et al.", | |
| "venue": "In NSDI, 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "51": { | |
| "title": "Metron: NFV service chains at the true speed of the underlying\nhardware.", | |
| "author": "Georgios P. Katsikas, Tom Barbette, Dejan Kosti\u0107, Rebecca Steinert, and\nGerald Q. Maguire Jr.", | |
| "venue": "In USENIX Symposium on Networked Systems Design and\nImplementation (NSDI), 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "52": { | |
| "title": "Tas: Tcp acceleration as an os service.", | |
| "author": "Antoine Kaufmann, Tim Stamler, Simon Peter, Naveen Kr Sharma, Arvind\nKrishnamurthy, and Thomas Anderson.", | |
| "venue": "In Proceedings of the Fourteenth EuroSys Conference 2019, 2019.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "53": { | |
| "title": "Snap: A microkernel approach to host networking.", | |
| "author": "Michael Marty, Marc de Kruijf, Jacob Adriaens, Christopher Alfeld, Sean Bauer,\nCarlo Contavalli, Michael Dalton, Nandita Dukkipati, William C. Evans, Steve\nGribble, Nicholas Kidd, Roman Kononov, Gautam Kumar, Carl Mauer, Emily\nMusick, Lena Olson, Erik Rubow, Michael Ryan, Kevin Springborn, Paul Turner,\nValas Valancius, Xi Wang, and Amin Vahdat.", | |
| "venue": "In Proceedings of the 27th ACM Symposium on Operating Systems\nPrinciples, SOSP \u201919, 2019.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "54": { | |
| "title": "Automatic parallelization: an overview of fundamental compiler\ntechniques.", | |
| "author": "Samuel Midkiff.", | |
| "venue": "2012.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "55": { | |
| "title": "Data-parallel finite-state machines.", | |
| "author": "Todd Mytkowicz, Madanlal Musuvathi, and Wolfram Schulte.", | |
| "venue": "In Proceedings of the 19th International Conference on\nArchitectural Support for Programming Languages and Operating Systems\n(ASPLOS), 2014.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "56": { | |
| "title": "Understanding pcie performance for end host networking.", | |
| "author": "Rolf Neugebauer, Gianni Antichi, Jos\u00e9 Fernando Zazo, Yury Audzevich, Sergio\nL\u00f3pez-Buedo, and Andrew W. Moore.", | |
| "venue": "In Proceedings of the 2018 Conference of the ACM Special\nInterest Group on Data Communication, 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "57": { | |
| "title": "Sailfish: Accelerating cloud-scale multi-tenant multi-service\ngateways with programmable switches.", | |
| "author": "Tian Pan, Nianbing Yu, Chenhao Jia, Jianwen Pi, Liang Xu, Yisong Qiao, Zhiguo\nLi, Kun Liu, Jie Lu, Jianyuan Lu, Enge Song, Jiao Zhang, Tao Huang, and\nShunmin Zhu.", | |
| "venue": "In Proceedings of the 2021 ACM SIGCOMM 2021 Conference, 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "58": { | |
| "title": "NetBricks: Taking the v out of NFV.", | |
| "author": "Aurojit Panda, Sangjin Han, Keon Jang, Melvin Walls, Sylvia Ratnasamy, and\nScott Shenker.", | |
| "venue": "In 12th USENIX Symposium on Operating Systems Design and\nImplementation (OSDI), 2016.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "59": { | |
| "title": "Automatic parallelization of software network functions.", | |
| "author": "Francisco Pereira, Fernando M.V. Ramos, and Luis Pedrosa.", | |
| "venue": "In USENIX Symposium on Networked Systems Design and\nImplementation (NSDI), 2024.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "60": { | |
| "title": "Floem: A programming system for NIC-Accelerated network\napplications.", | |
| "author": "Phitchaya Mangpo Phothilimthana, Ming Liu, Antoine Kaufmann, Simon Peter,\nRastislav Bodik, and Thomas Anderson.", | |
| "venue": "In 13th USENIX Symposium on Operating Systems Design and\nImplementation (OSDI), 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "61": { | |
| "title": "Scalable fsm parallelization via path fusion and higher-order\nspeculation.", | |
| "author": "Junqiao Qiu, Xiaofan Sun, Amir Hossein Nodehi Sabet, and Zhijia Zhao.", | |
| "venue": "In Proceedings of the 26th ACM International Conference on\nArchitectural Support for Programming Languages and Operating Systems\n(ASPLOS), 2021.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "62": { | |
| "title": "Enabling scalability-sensitive speculative parallelization for fsm\ncomputations.", | |
| "author": "Junqiao Qiu, Zhijia Zhao, Bo Wu, Abhinav Vishnu, and Shuaiwen Leon Song.", | |
| "venue": "In Proceedings of the International Conference on\nSupercomputing, 2017.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "63": { | |
| "title": "Optimizing BPF: Smaller Programs for Greater Performance.", | |
| "author": "Quentin Monnet.", | |
| "venue": "[Online. Retrieved Jan 20, 2021.]\nhttps://www.netronome.com/blog/optimizing-bpf-smaller-programs-greater-performance/,\n2020.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "64": { | |
| "title": "Split/Merge: System support for elastic execution in virtual\nmiddleboxes.", | |
| "author": "Shriram Rajagopalan, Dan Williams, Hani Jamjoom, and Andrew Warfield.", | |
| "venue": "In USENIX Symposium on Networked Systems Design and\nImplementation, 2013.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "65": { | |
| "title": "Netmap: a novel framework for fast packet I/O.", | |
| "author": "Luigi Rizzo.", | |
| "venue": "In USENIX annual technical conference (ATC), 2012.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "66": { | |
| "title": "Inside the social network\u2019s (datacenter) network.", | |
| "author": "Arjun Roy, Hongyi Zeng, Jasmeet Bagga, George Porter, and Alex C. Snoeren.", | |
| "venue": "In ACM SIGCOMM, 2015.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "67": { | |
| "title": "A case for spraying packets in software middleboxes.", | |
| "author": "Hugo Sadok, Miguel Elias M. Campista, and Lu\u00eds Henrique M. K. Costa.", | |
| "venue": "In Proceedings of the 17th ACM Workshop on Hot Topics in\nNetworks, 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "68": { | |
| "title": "FlexTOE: Flexible TCP offload with Fine-Grained parallelism.", | |
| "author": "Rajath Shashidhara, Tim Stamler, Antoine Kaufmann, and Simon Peter.", | |
| "venue": "In USENIX Symposium on Networked Systems Design and\nImplementation (NSDI), 2022.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "69": { | |
| "title": "XDP: 1.5 years in production. Evolution and lessons learned.", | |
| "author": "Nikita V. Shirokov.", | |
| "venue": "In Linux Plumbers Conference, 2018.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "70": { | |
| "title": "Scalable TCP Session Monitoring with Symmetric Receive-side\nScaling.", | |
| "author": "Shinae Woo and KyoungSoo Park.", | |
| "venue": "Technical report, KAIST, 2020.", | |
| "url": null | |
| } | |
| }, | |
| { | |
| "71": { | |
| "title": "On the characteristics and origins of internet flow rates.", | |
| "author": "Yin Zhang, Lee Breslau, Vern Paxson, and Scott Shenker.", | |
| "venue": "In ACM SIGCOMM, 2002.", | |
| "url": null | |
| } | |
| } | |
| ], | |
| "url": "http://arxiv.org/html/2309.14647v2" | |
| } |