Datagram Network

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 3177 Experts worldwide ranked by ideXlab platform

Marco Dorigo - One of the best experts on this subject based on the ideXlab platform.

  • mobile agents for adaptive routing
    Hawaii International Conference on System Sciences, 1998
    Co-Authors: G Di Caro, Marco Dorigo
    Abstract:

    This paper introduces AntNet, a new routing algorithm for telecommunication networks. AntNet is an adaptive, distributed, mobile-agent-based algorithm inspired by recent work on the ant colony metaphor. We apply AntNet in a datagram network and compare it with both static and adaptive state-of-the-art routing algorithms, running experiments over various paradigmatic temporal and spatial traffic distributions. AntNet showed very good performance and robustness under all experimental conditions with respect to its competitors.
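The core idea behind ant-colony routing, agents probabilistically reinforcing next hops that turned out well, can be illustrated with a minimal sketch. This is not the AntNet algorithm itself; the update rule, names, and the reinforcement parameter below are illustrative assumptions.

```python
import random

def reinforce(table, dest, neighbor, r=0.3):
    """Shift probability mass toward `neighbor` for `dest` and renormalize.

    A "backward ant" that observed a fast path via `neighbor` calls this;
    the rule conserves total probability exactly.
    """
    probs = table[dest]
    probs[neighbor] += r * (1.0 - probs[neighbor])  # reward the used hop
    for n in probs:
        if n != neighbor:
            probs[n] *= (1.0 - r)                   # decay the others

def next_hop(table, dest, rng=random):
    """Sample a neighbor according to the current routing probabilities."""
    probs = table[dest]
    return rng.choices(list(probs), weights=probs.values())[0]

# Routing table at one node with neighbors A and B, destination D.
table = {"D": {"A": 0.5, "B": 0.5}}
reinforce(table, "D", "A")   # a backward ant reports a good path via A
assert abs(sum(table["D"].values()) - 1.0) < 1e-9   # still a distribution
assert table["D"]["A"] > table["D"]["B"]            # A is now preferred
```

Because forwarding stays probabilistic rather than deterministic, the node keeps exploring alternate hops, which is what gives this style of routing its adaptivity under shifting traffic.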

Reuven Cohen - One of the best experts on this subject based on the ideXlab platform.

  • restricted dynamic steiner trees for scalable multicast in Datagram Networks
    IEEE ACM Transactions on Networking, 1998
    Co-Authors: Ehud Aharoni, Reuven Cohen
    Abstract:

    The paper addresses the issue of minimizing the number of nodes involved in routing over a multicast tree, and in the maintenance of such a tree, in a datagram network. It presents a scheme in which the tree routing and maintenance burden is laid only upon the source node and the destination nodes associated with the multicast tree. The main concept behind this scheme is to view each multicast tree as a collection of unicast paths and to locate only the multicast source and destination nodes at the junctions of the tree. The paper shows that despite this restriction, the cost of the created multicast trees is not necessarily higher than that of trees created by other algorithms which, lacking the restriction, require all nodes along a tree's data path to participate in routing over the tree and in its maintenance.
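The "tree as a collection of unicast paths" idea can be shown with a toy sketch: the multicast tree is the union of one unicast path per destination, and only the source and destinations may serve as junctions, so intermediate routers keep no multicast state. The graph and paths below are made up for illustration and are not from the paper.

```python
def build_tree(paths):
    """Union of unicast paths (each a list of nodes) -> set of undirected edges."""
    return {frozenset(e) for path in paths for e in zip(path, path[1:])}

source = "S"
destinations = ["D1", "D2"]
# One unicast path per destination; destination D1 also acts as the
# junction from which the path to D2 branches.
paths = [["S", "a", "b", "D1"], ["D1", "c", "D2"]]

tree = build_tree(paths)
# In the restricted scheme, only these nodes participate in tree routing
# and maintenance; routers a, b, c merely forward ordinary unicast packets.
participants = {source, *destinations}
all_tree_nodes = {n for e in tree for n in e}
assert participants < all_tree_nodes   # far fewer nodes carry the burden
```

The trade-off the paper analyzes is that restricting junctions to source/destination nodes could, in principle, force longer paths, yet the resulting tree cost is not necessarily higher than that of unrestricted algorithms.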

  • restricted dynamic steiner trees for scalable multicast in Datagram Networks
    International Conference on Computer Communications, 1997
    Co-Authors: Ehud Aharoni, Reuven Cohen
    Abstract:

    The paper addresses the issue of minimizing the number of nodes involved in routing over a multicast tree, and in the maintenance of such a tree, in a datagram network. It presents a scheme in which the tree routing and maintenance burden is laid only upon the source node and the destination nodes associated with the multicast tree. The main concept behind this scheme is to view each multicast tree as a collection of unicast paths and to locate only the multicast source and destination nodes at the junctions of the tree. The paper shows that despite this restriction, the cost of the created multicast trees is not necessarily higher than that of trees created by other algorithms which, lacking the restriction, require all nodes along a tree's data path to participate in routing over the tree and in its maintenance.

G Di Caro - One of the best experts on this subject based on the ideXlab platform.

  • mobile agents for adaptive routing
    Hawaii International Conference on System Sciences, 1998
    Co-Authors: G Di Caro, Marco Dorigo
    Abstract:

    This paper introduces AntNet, a new routing algorithm for telecommunication networks. AntNet is an adaptive, distributed, mobile-agent-based algorithm inspired by recent work on the ant colony metaphor. We apply AntNet in a datagram network and compare it with both static and adaptive state-of-the-art routing algorithms, running experiments over various paradigmatic temporal and spatial traffic distributions. AntNet showed very good performance and robustness under all experimental conditions with respect to its competitors.

Ehud Aharoni - One of the best experts on this subject based on the ideXlab platform.

  • restricted dynamic steiner trees for scalable multicast in Datagram Networks
    IEEE ACM Transactions on Networking, 1998
    Co-Authors: Ehud Aharoni, Reuven Cohen
    Abstract:

    The paper addresses the issue of minimizing the number of nodes involved in routing over a multicast tree, and in the maintenance of such a tree, in a datagram network. It presents a scheme in which the tree routing and maintenance burden is laid only upon the source node and the destination nodes associated with the multicast tree. The main concept behind this scheme is to view each multicast tree as a collection of unicast paths and to locate only the multicast source and destination nodes at the junctions of the tree. The paper shows that despite this restriction, the cost of the created multicast trees is not necessarily higher than that of trees created by other algorithms which, lacking the restriction, require all nodes along a tree's data path to participate in routing over the tree and in its maintenance.

  • restricted dynamic steiner trees for scalable multicast in Datagram Networks
    International Conference on Computer Communications, 1997
    Co-Authors: Ehud Aharoni, Reuven Cohen
    Abstract:

    The paper addresses the issue of minimizing the number of nodes involved in routing over a multicast tree, and in the maintenance of such a tree, in a datagram network. It presents a scheme in which the tree routing and maintenance burden is laid only upon the source node and the destination nodes associated with the multicast tree. The main concept behind this scheme is to view each multicast tree as a collection of unicast paths and to locate only the multicast source and destination nodes at the junctions of the tree. The paper shows that despite this restriction, the cost of the created multicast trees is not necessarily higher than that of trees created by other algorithms which, lacking the restriction, require all nodes along a tree's data path to participate in routing over the tree and in its maintenance.

John W Lockwood - One of the best experts on this subject based on the ideXlab platform.

  • a new architecture performs content scanning of TCP flows in high-speed networks: combining a TCP processing engine, a per-flow state store, and a content-scanning engine, this architecture permits complete payload inspection of 8 million TCP flows
    2004
    Co-Authors: David V Schuehler, James Moscola, John W Lockwood
    Abstract:

    The Transmission Control Protocol (TCP) is the workhorse protocol of the Internet. Most of the data passing through the Internet transits the network using TCP layered atop the Internet Protocol (IP). Monitoring, capturing, filtering, and blocking traffic on high-speed Internet links requires the ability to process TCP packets directly in hardware, and because TCP is a stream-oriented protocol that operates above an unreliable datagram network, reconstructing the underlying data flow is complex.

    High-speed network intrusion detection and prevention systems guard against several types of threats (see the "Related work" sidebar). When used in backbone networks, these content-scanning systems must not inhibit network throughput. Gilder's law predicts that the need for bandwidth will grow at least three times as fast as computing power.1 As the gap between network bandwidth and computing power widens, improved microelectronic architectures are needed to monitor and filter network traffic without limiting throughput. To address these issues, we've designed a hardware-based TCP/IP content-processing system that supports content scanning and flow blocking for millions of flows at gigabit line rates.

    TCP splitter

    The TCP splitter2 technology was previously developed to monitor TCP data streams, sending a consistent byte stream of data to a client application for every TCP data flow passing through the circuit. The TCP splitter accomplishes this by tracking the TCP sequence number along with the current flow state. Out-of-order packets are dropped to ensure that the client application receives the full TCP data stream without the need for large stream-reassembly buffers. Dropping packets to maintain an ordered packet flow throughout the network can, however, adversely affect the network's overall throughput. Jaiswal et al. analyzed out-of-sequence packets in tier-1 IP backbones.
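The splitter's delivery rule, drop anything out of order rather than buffer it, can be sketched in a few lines of Python; the class and method names are illustrative, not the hardware's actual interface.

```python
class SplitterFlow:
    """Per-flow state for a TCP-splitter-style in-order byte stream.

    Tracks only the next expected sequence number; out-of-order segments
    are dropped (the sender's retransmission fills the gap later), so no
    stream-reassembly buffer is needed.
    """

    def __init__(self, isn):
        self.expected = isn           # next in-order sequence number
        self.delivered = bytearray()  # stand-in for the client application

    def on_segment(self, seq, payload):
        if seq != self.expected:      # out of order: drop, don't buffer
            return False
        self.delivered += payload
        self.expected += len(payload)
        return True

flow = SplitterFlow(isn=1000)
flow.on_segment(1000, b"abc")   # in order: delivered
flow.on_segment(1009, b"ghi")   # out of order: dropped
flow.on_segment(1003, b"def")   # in order: delivered
assert bytes(flow.delivered) == b"abcdef"
```

The design choice mirrors the text: correctness of the delivered byte stream is bought by dropping packets, which trades some network throughput for the elimination of large per-flow reassembly buffers.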
    They noted that approximately 95 percent of all TCP packets on Internet backbone links were in proper sequence. Network-induced packet reordering accounted for a small fraction of out-of-sequence packets, with most resulting from retransmissions due to data loss. More than 86 percent of all observed TCP flows contained no out-of-sequence packets.

    A suite of layered protocol wrappers, developed earlier, processes network and transport protocols in reconfigurable hardware.5 The wrappers include an asynchronous transfer mode (ATM) cell wrapper, an ATM adaptation layer type 5 (AAL5) frame wrapper, and an IP wrapper. These wrappers provide lower-layer protocol processing for our TCP architecture.

    Related work

    By their very nature, intrusion detection systems (IDSs) and intrusion prevention systems must perform deep packet inspection on all traffic traversing the network. This task is difficult when data rates are high and the system must track many simultaneous flows. Software IDS solutions, such as Snort,1 work well only when aggregate bandwidth rates are low. Implementing an external monitor that can track a Transmission Control Protocol (TCP) connection state is difficult; Bhargavan et al. discuss the complexities of tracking various properties of a protocol using language-recognition techniques.2 General solutions to this problem vary greatly. Monitoring and reassembling flows, tasks required for an IDS, become even more complicated in the face of direct attempts to evade detection; Handley et al. expound on this topic.3 One such evasion technique would be to modify an end-system protocol stack so that TCP retransmissions contain different content than the original data transmissions.

    A recently developed passive monitoring system can capture and accurately time-stamp packets at data rates of up to OC-48 (2.5 Gbps).4 Highly accurate time stamps correlate data captured by multiple monitoring systems in a wide area network. Optical splitters deliver a copy of the network traffic to the monitoring station; the system stores the first 44 bytes of each packet, and analysis of the captured data occurs out of band. Network World Fusion tested six commercially available gigabit IDSs by sending 28 attacks along with 970 Mbps of background traffic.5 After system tuning, only one system detected all 28 attacks while processing data on a gigabit Ethernet link. In general, software-based systems are incapable of matching regular expressions at gigabit rates.

    Previous work also exists in the area of string matching on field-programmable gate arrays. Sidhu and Prasanna were primarily concerned with minimizing the time and space required to construct nondeterministic finite automata (NFAs);6 they run their NFA construction algorithm in hardware instead of software. Hutchings, Franklin, and Carver followed with an analysis of this approach for the large set of regular expressions found in a Snort database.

    Content-scanning engine

    The content-scanning engine can scan packet payloads for a set of regular expressions.6 To do so, this hardware module employs a set of deterministic finite automata, each searching in parallel for one of the targeted regular expressions. Upon matching a packet's payload against any of these expressions, the content-scanning engine can either let the data pass or drop the packet. It can also send an alert message to a log server when it detects a match in a packet; the alert message contains the matching packet's source and destination addresses along with a list of the regular expressions found in the packet. When implemented with four parallel search engines, the content-scanning engine provides a throughput of 2.5 Gbps.
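In software terms, the engine's match-then-decide behavior looks roughly like the sketch below; the patterns, the alert format, and the function names are invented for illustration, and Python's `re` stands in for the hardware's per-pattern DFAs.

```python
import re

# A small set of target expressions; in hardware, one DFA per pattern
# scans the payload in parallel. These example patterns are made up.
PATTERNS = [re.compile(p) for p in (rb"root\.exe", rb"/etc/passwd")]

def scan(payload, src, dst, action="drop"):
    """Scan one payload; on any match, apply `action` and build an alert."""
    hits = [p.pattern for p in PATTERNS if p.search(payload)]
    if not hits:
        return "pass", None
    # The alert would be sent to a log server: addresses plus matched list.
    alert = {"src": src, "dst": dst, "matched": hits}
    return action, alert

verdict, alert = scan(b"GET /etc/passwd HTTP/1.0", "10.0.0.1", "10.0.0.2")
assert verdict == "drop"
assert alert["matched"] == [b"/etc/passwd"]
```

Per-packet scanning like this is the easy case; as the abstract goes on to explain, the harder problem is matching content that spans packet boundaries, which requires saving and restoring matcher state per flow.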
    TCP-based content-scanning engine

    The new TCP-based content-scanning engine integrates and extends the capabilities of the TCP splitter and the old content-scanning engine.

    Design requirements

    A hashing algorithm that produces an even distribution across all hash buckets is important to the circuit's overall efficiency. We performed an initial analysis of the system's flow-classification hashing algorithm against packet traces available from the National Laboratory for Applied Network Research. With 26,452 flow identifiers hashed into a table of 8 million entries, a hash collision occurred in less than 0.3 percent of the flows.

    We've added features to the TCP processing circuit to support the following services:

    • Flow blocking. This lets the system block a flow at a particular byte offset within the TCP data stream.
    • Flow unblocking. The system can re-enable a previously disabled flow so that data for that flow can once again pass through the circuit.
    • Flow termination. This mechanism shuts down a selected flow by generating a TCP FIN (finish) packet.
    • Flow modification. We will provide the ability to sanitize selected data contained within a TCP stream.

    Flow state store

    To support millions of TCP flows, the TCP processing engine uses one 512-Mbyte, off-chip, synchronous dynamic random access memory (SDRAM) module. The interface to this module has a 64-bit-wide data path and supports a burst length of eight memory operations. By matching per-flow memory requirements to the burst width of the memory module, we can optimize use of memory bandwidth: storing 64 bytes of state information for each flow lets one burst transfer carry a flow's entire state. This configuration supports 8 million simultaneous flows.
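The collision figure from the design-requirements analysis is easy to sanity-check with a back-of-envelope model; the assumption that the hash spreads flows uniformly over the buckets is ours, not the paper's.

```python
# With n flow identifiers dropped uniformly into m buckets, the chance
# that a given flow shares its bucket with at least one other flow is
# roughly 1 - ((m - 1) / m) ** (n - 1).
n = 26_452        # flow identifiers observed in the NLANR traces
m = 8_000_000     # hash-table entries (the "8 million" above)

p_collide = 1 - ((m - 1) / m) ** (n - 1)
# ~0.0033, i.e. about 0.33 percent: the same order of magnitude as the
# reported "less than 0.3 percent of the flows".
assert 0.002 < p_collide < 0.004
```

That the measured rate on real traces comes in at or below the uniform-hashing estimate is consistent with the stated goal of an evenly distributing hash function.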
    Assuming $50 as the purchase price for a 512-Mbyte SDRAM module, the cost of storing context for 8 million flows is only 0.000625 cents per flow, or 1,600 flows per penny. Of the 64 bytes stored for each flow, the TCP processing engine uses 32 bytes to maintain flow state and memory-management overhead; the remaining 32 bytes hold the application-specific data for each flow context.

    The hash algorithm within the TCP processing engine hashes the source and destination IP addresses and TCP ports into a 22-bit value, which serves as a direct index to the first entry in a hash bucket. The record's format lets a hash bucket hold information for multiple flows that hash to the same bucket; to ensure that the system maintains real-time behavior, we constrain the number of link traversals to a constant value.

    [Figure 2 shows the flow state record for one entry of a given flow. Each box represents 32 bits; two adjacent boxes collectively represent 64 bits, which the state store manager can read from SDRAM in one clock cycle. For example, the hash value is located at bits 31 to 0, and the flow ID at bits 63 to 32, of the first memory location. Because the memory device supports burst read and write operations, the state store manager retrieves all data (8 rows, 64 bits each) in a single memory operation. The state store manager maintains one such record for every flow that the content-scanning engine processes.]

    The state store manager can cache state information in on-chip block RAM, providing faster access to state information for the most recently accessed flows; a write-back cache design improves performance.

    Stream-based content scanning

    The content-scanning engine processes TCP data streams from the TCP processing engine, which lets it match data that spans multiple packets. The engine must perform regular-expression-based scans on many active TCP flows.
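The 64-byte record layout can be mocked up with Python's `struct` module; only the first word's fields (hash in bits 31 to 0, flow ID in bits 63 to 32) are specified in the text, so the remaining seven words below are placeholders, not the real field layout.

```python
import struct

# Eight little-endian 64-bit words = 64 bytes, matching one SDRAM burst
# (burst length 8 on a 64-bit data path), so one burst moves a record.
RECORD = struct.Struct("<8Q")
assert RECORD.size == 64

def pack_record(hash22, flow_id, rest=(0,) * 7):
    """Pack a flow state record; `rest` stands in for the unspecified fields."""
    word0 = (flow_id << 32) | (hash22 & 0xFFFF_FFFF)
    return RECORD.pack(word0, *rest)

rec = pack_record(hash22=0x3ABCDE, flow_id=42)
word0 = RECORD.unpack(rec)[0]
assert word0 & 0xFFFF_FFFF == 0x3ABCDE   # hash value at bits 31..0
assert word0 >> 32 == 42                 # flow ID at bits 63..32
```

Sizing the record to exactly one burst is the point of the design: fetching or writing back a flow's entire state costs a single memory operation rather than several.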
    To process interleaved flows, the content-scanning engine must perform a context switch to save and restore per-flow context information. When a packet from some flow reaches the engine, the engine must restore the last known matching state for that flow before starting the matching operation on the packet; when it has finished processing the packet, it must save the flow's new matching state using the TCP processing circuit's state store resources.

    Each content-scanning engine processes data one byte at a time, while the TCP processing circuit uses a 4-byte-wide data path, so each engine incurs a 4-to-1 slowdown when processing packet data; four content-scanning engines therefore operate in parallel, processing four flows concurrently.

    TCP processing

    The architecture receives data through the IP wrappers.3 As the left side of the architecture figure shows, the input stage includes:

    • a first-in, first-out (FIFO) frame buffer, which stores the packet;
    • a checksum engine, which validates the TCP checksum; and
    • a flow classifier, which computes a hash value for the packet.

    The flow-classification hash value is passed to the state store manager, which retrieves the state information associated with the particular flow. Results are written to a control FIFO buffer, and the state store is updated with the current state of the flow. An output state machine reads data from the frame and control FIFO buffers and passes it to the packet-routing engine. Most traffic flows through the content-scanning engines, which scan the data; packet retransmissions bypass these engines and go directly to the flow-blocking module, as does data returning from the content-scanning engines. This stage updates the per-flow state store with the latest application-specific state information, and if a content-scanning engine has enabled blocking for a flow, the flow-blocking module now enforces it.
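The restore-scan-save cycle above can be sketched with a toy single-pattern matcher standing in for the hardware DFA. The pattern, flow key, and state encoding (the number of pattern bytes matched so far) are illustrative assumptions, and the mismatch fallback is simplified relative to a real automaton.

```python
PATTERN = b"evil"   # example target; fine for this pattern's lack of self-overlap

def scan_packet(state, payload):
    """Resume matching with `state` bytes already matched; return (new_state, hit)."""
    for byte in payload:
        if byte == PATTERN[state]:
            state += 1
        else:
            state = 1 if byte == PATTERN[0] else 0  # simplified fallback
        if state == len(PATTERN):
            return 0, True
    return state, False

flow_ctx = {}   # stand-in for the per-flow state store
hit = False
for payload in (b"xxev", b"ilyy"):                  # "evil" spans two packets
    state = flow_ctx.get("flow-1", 0)               # restore flow context
    state, hit = scan_packet(state, payload)
    flow_ctx["flow-1"] = state                      # save flow context
    if hit:
        break
assert hit   # the match is found even though it crossed a packet boundary
```

Without the saved context, neither packet alone contains the pattern, which is exactly why the engine processes TCP byte streams rather than isolated packets.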
    This module compares the packet's sequence number with the sequence numbers at which flow blocking should take place. If the packet meets the blocking criteria, the flow-blocking module drops it from the network; any remaining packets go to the outbound protocol wrapper.

    The state store manager is responsible for processing requests to read and write flow state records. It also handles all interactions with the SDRAM memory, and it caches recently accessed flow state information. The SDRAM controller exposes three memory-access interfaces: read-write, write-only, and read-only. The controller prioritizes requests in that order, with the read-write interface having the highest priority.

    In a worst-case scenario, in which there is no more than one entry per hash bucket, each packet requires a total of two read and two write operations to the SDRAM:

    • an 8-word read to retrieve flow state,
    • an 8-word write to initialize a new flow record,
    • a 4-word read to retrieve flow-blocking information, and
    • a 5-word write to update application-specific flow state and blocking information.

    Memory accesses aren't necessary for TCP acknowledgment packets containing no data. Analysis indicates that all read and write operations can occur during packet processing if the average TCP packet contains more than 120 bytes of data; if TCP packets carry less than this amount, there might not be enough time to complete all memory operations during packet processing.
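The blocking test itself reduces to a sequence-number comparison. A minimal sketch, with illustrative field names, assuming a flow is blocked from a recorded byte offset in the stream onward:

```python
def should_drop(pkt_seq, pkt_len, block_seq):
    """Drop the segment if any of its bytes fall at or past the blocking point.

    `block_seq` is the sequence number at which blocking was enabled for
    this flow, or None if the flow is not blocked.
    """
    if block_seq is None:
        return False
    return pkt_seq + pkt_len > block_seq   # segment reaches blocked bytes

assert not should_drop(pkt_seq=1000, pkt_len=100, block_seq=None)   # unblocked
assert not should_drop(pkt_seq=1000, pkt_len=100, block_seq=1100)   # ends before
assert should_drop(pkt_seq=1000, pkt_len=100, block_seq=1099)       # overlaps
```

Keying the decision on sequence numbers rather than packet counts is what lets the system block a flow at an exact byte offset within the TCP data stream, as the flow-blocking service above requires.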