Longest Prefix Match

The Experts below are selected from a list of 432 Experts worldwide, ranked by the ideXlab platform.

Sergey I Nikolenko - One of the best experts on this subject based on the ideXlab platform.

  • General ternary bit strings on commodity Longest-Prefix-Match infrastructures
    International Conference on Network Protocols, 2017
    Co-Authors: Pavel Chuprikov, Kirill Kogan, Sergey I Nikolenko
    Abstract:

    Ternary Content-Addressable Memory (TCAM) is a powerful tool for representing network services with line-rate lookup time. There are various software-based approaches to representing multi-field packet classifiers; unfortunately, all of them either require exponential memory or impose additional constraints on field representations (e.g., prefixes or exact values) to achieve line-rate lookup time. In this work, we propose alternatives to TCAM and introduce a novel approach to representing packet classifiers based on ternary bit strings (without constraining field representation) on commodity Longest-Prefix-Match (LPM) infrastructures. These representations are built on a novel property, prefix reorderability, which defines how to transform an ordered set of ternary bit strings into prefixes with LPM priorities in linear memory. Our results are supported by evaluations on large-scale packet classifiers with real parameters from ClassBench; moreover, we have developed a prototype in P4 to support these types of transformations.
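
    To make the target concrete, here is a minimal, hypothetical sketch of the commodity LPM semantics that such a transformation relies on: in an LPM table the length of a matching prefix acts as its priority, so a rule list that has been rewritten into prefixes is resolved by the longest match. This illustrates plain longest-prefix matching only, not the paper's prefix-reorderability construction; every name below is invented for the example.

    class LpmTable:
        """Toy software LPM table: the longest matching prefix wins."""

        def __init__(self):
            self.by_len = {}  # prefix length -> {prefix bit string -> action}

        def insert(self, prefix_bits, action):
            # prefix_bits is a string of '0'/'1' characters, e.g. "1011"
            self.by_len.setdefault(len(prefix_bits), {})[prefix_bits] = action

        def lookup(self, key_bits):
            # Probe lengths from longest to shortest; the first hit is the
            # longest matching prefix and therefore the highest-priority rule.
            for length in sorted(self.by_len, reverse=True):
                action = self.by_len[length].get(key_bits[:length])
                if action is not None:
                    return action
            return None  # no rule matched

    # Usage: the longer (higher-priority) prefix shadows the shorter one.
    table = LpmTable()
    table.insert("10", "rule_A")
    table.insert("1011", "rule_B")
    assert table.lookup("10110001") == "rule_B"
    assert table.lookup("10000001") == "rule_A"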

  • ICNP - General ternary bit strings on commodity Longest-Prefix-Match infrastructures
    2017 IEEE 25th International Conference on Network Protocols (ICNP), 2017
    Co-Authors: Pavel Chuprikov, Kirill Kogan, Sergey I Nikolenko
    Abstract:

    Ternary Content-Addressable Memory (TCAM) is a powerful tool for representing network services with line-rate lookup time. There are various software-based approaches to representing multi-field packet classifiers; unfortunately, all of them either require exponential memory or impose additional constraints on field representations (e.g., prefixes or exact values) to achieve line-rate lookup time. In this work, we propose alternatives to TCAM and introduce a novel approach to representing packet classifiers based on ternary bit strings (without constraining field representation) on commodity Longest-Prefix-Match (LPM) infrastructures. These representations are built on a novel property, prefix reorderability, which defines how to transform an ordered set of ternary bit strings into prefixes with LPM priorities in linear memory. Our results are supported by evaluations on large-scale packet classifiers with real parameters from ClassBench; moreover, we have developed a prototype in P4 to support these types of transformations.

Xinan Tang - One of the best experts on this subject based on the ideXlab platform.

  • PPOPP - High-performance IPv6 forwarding algorithm for multi-core and multithreaded network processor
    Proceedings of the eleventh ACM SIGPLAN symposium on Principles and practice of parallel programming - PPoPP '06, 2006
    Co-Authors: Xinan Tang, Bei Hua
    Abstract:

    IP forwarding is one of the main bottlenecks in Internet backbone routers, as it requires performing a Longest-Prefix Match at 10 Gbps or higher. IPv6 forwarding further exacerbates the situation because its search space is quadrupled. We propose a high-performance IPv6 forwarding algorithm, TrieC, and implement it efficiently on the Intel IXP2800 network processor (NPU). Programming the multi-core and multithreaded NPU is a daunting task, so we study the interaction between parallel algorithm design and architecture mapping to facilitate efficient implementation, and we experiment with an architecture-aware design principle to guarantee the high performance of the resulting algorithm. This paper investigates the main software design issues that have a dramatic performance impact on any NPU-based implementation: memory space reduction, instruction selection, data allocation, task partitioning, latency hiding, and thread synchronization. We provide insight into how to design an NPU-aware algorithm for high-performance networking applications and, based on a detailed performance analysis of TrieC, offer guidance on developing high-performance networking applications for multi-core and multithreaded architectures.
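
    As a rough illustration of why memory accesses dominate such a design, the following sketch shows a generic fixed-stride multibit trie lookup; it is not TrieC itself, and every name in it is hypothetical. Each step consumes STRIDE address bits, so the number of memory accesses per lookup is roughly the address width divided by the stride, which is exactly the latency an NPU implementation tries to hide with multithreading.

    STRIDE = 4  # address bits consumed per trie level

    class MultibitNode:
        def __init__(self):
            size = 1 << STRIDE
            self.children = [None] * size   # one child slot per stride value
            self.next_hop = [None] * size   # next hop stored alongside the slot

    def insert(root, prefix_bits, next_hop):
        # Simplification: prefix lengths are multiples of STRIDE; real multibit
        # tries expand other lengths into several entries.
        chunks = [prefix_bits[i:i + STRIDE] for i in range(0, len(prefix_bits), STRIDE)]
        node = root
        for chunk in chunks[:-1]:
            idx = int(chunk, 2)
            if node.children[idx] is None:
                node.children[idx] = MultibitNode()
            node = node.children[idx]
        node.next_hop[int(chunks[-1], 2)] = next_hop

    def lookup(root, addr_bits):
        node, best = root, None
        for i in range(0, len(addr_bits), STRIDE):  # one memory access per level
            idx = int(addr_bits[i:i + STRIDE], 2)
            if node.next_hop[idx] is not None:
                best = node.next_hop[idx]           # longest match so far
            node = node.children[idx]
            if node is None:
                break
        return best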

  • ICESS - TrieC: a high-speed IPv6 lookup with fast updates using network processor
    Embedded Software and Systems, 2005
    Co-Authors: Bei Hua, Xinan Tang
    Abstract:

    Address lookup is one of the main bottlenecks in Internet backbone routers, as it requires the router to perform a Longest-Prefix Match when searching the routing table for the next hop. Ever-increasing Internet bandwidth, continuously growing prefix tables, and the inevitable migration to the IPv6 address architecture further exacerbate this situation. In recent years a variety of high-speed address lookup algorithms have been proposed; however, most of them are ill-suited to IPv6 lookup. This paper proposes a high-speed IPv6 lookup algorithm, TrieC, which achieves high-speed address lookup, fast incremental prefix updates, high scalability, and reasonable memory requirements by taking full advantage of the network processor architecture. The performance of TrieC is evaluated with several IPv6 routing tables of different sizes and prefix length distributions on the Intel IXP2800 network processor (NPU). Simulation shows that TrieC can support IPv6 lookup at the OC-192 line rate; furthermore, if TrieC is pipelined in hardware, it can achieve one IPv6 lookup per memory access.
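
    For a sense of what "fast incremental prefix updates" means in this setting, a plain binary trie (not TrieC; all names here are hypothetical) already supports them: adding or removing one prefix touches at most one node per prefix bit and requires no table rebuild, while a lookup simply remembers the longest match seen along the path.

    class TrieNode:
        __slots__ = ("zero", "one", "next_hop")

        def __init__(self):
            self.zero = None      # child for bit '0'
            self.one = None       # child for bit '1'
            self.next_hop = None  # set only where a prefix ends

    def _child(node, bit):
        return node.zero if bit == "0" else node.one

    def insert(root, prefix_bits, next_hop):
        node = root
        for bit in prefix_bits:               # O(prefix length) work, no rebuild
            nxt = _child(node, bit)
            if nxt is None:
                nxt = TrieNode()
                if bit == "0":
                    node.zero = nxt
                else:
                    node.one = nxt
            node = nxt
        node.next_hop = next_hop

    def delete(root, prefix_bits):
        node = root
        for bit in prefix_bits:
            node = _child(node, bit)
            if node is None:
                return                        # prefix was not present
        node.next_hop = None                  # empty nodes could be pruned lazily

    def lookup(root, addr_bits):
        node, best = root, root.next_hop      # root may hold a default route
        for bit in addr_bits:
            node = _child(node, bit)
            if node is None:
                break
            if node.next_hop is not None:
                best = node.next_hop          # remember the longest match so far
        return best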

  • TrieC: A high-speed IPv6 lookup with fast updates using network processor
    Lecture Notes in Computer Science, 2005
    Co-Authors: Bei Hua, Xinan Tang
    Abstract:

    Address lookup is one of the main bottlenecks in Internet backbone routers, as it requires the router to perform a Longest-Prefix Match when searching the routing table for the next hop. Ever-increasing Internet bandwidth, continuously growing prefix tables, and the inevitable migration to the IPv6 address architecture further exacerbate this situation. In recent years a variety of high-speed address lookup algorithms have been proposed; however, most of them are ill-suited to IPv6 lookup. This paper proposes a high-speed IPv6 lookup algorithm, TrieC, which achieves high-speed address lookup, fast incremental prefix updates, high scalability, and reasonable memory requirements by taking full advantage of the network processor architecture. The performance of TrieC is evaluated with several IPv6 routing tables of different sizes and prefix length distributions on the Intel IXP2800 network processor (NPU). Simulation shows that TrieC can support IPv6 lookup at the OC-192 line rate; furthermore, if TrieC is pipelined in hardware, it can achieve one IPv6 lookup per memory access.

Zhiyong Liang - One of the best experts on this subject based on the ideXlab platform.

  • A Scalable Parallel Lookup Framework Avoiding Longest Prefix Match
    International Conference on Information Networking, 2004
    Co-Authors: Zhiyong Liang
    Abstract:

    Fast routing lookups are crucial for the forwarding performance of IP routers, and the Longest Prefix Match requirement is what makes routing lookups difficult. This paper proposes a method for partitioning a routing table that divides all prefixes into several prefix sets in which prefixes do not overlap. Based on this method, the paper also presents a common parallel lookup framework (PRLF) that reduces "longest prefix matching" over all prefixes to plain prefix matching within each of the several prefix sets. The framework effectively simplifies the design of lookup algorithms, improves lookup performance, and is suitable for most lookup algorithms. For a simple binary search algorithm, the framework achieves log₂(2N/B) lookup complexity, where N is the number of prefixes in the routing table and B is an integer greater than 4. The framework also scales easily to IPv6.
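
    The core idea can be sketched in a few lines: if the prefixes are split into sets that are internally non-overlapping (below, simply one set per prefix length, which is only an illustration and not the paper's partitioning scheme), then each set needs only a plain match on a fixed number of leading bits, the sets can be probed independently in parallel, and the answer is the hit from the set with the longest prefix. All names in the sketch are hypothetical.

    from collections import defaultdict

    def build_sets(prefix_table):
        """prefix_table maps a prefix bit string, e.g. '1011', to a next hop."""
        sets = defaultdict(dict)               # prefix length -> {prefix -> next hop}
        for prefix, next_hop in prefix_table.items():
            sets[len(prefix)][prefix] = next_hop
        return sets

    def parallel_lookup(sets, addr_bits):
        # Each probe is independent of the others, so on parallel hardware the
        # per-set matches could run concurrently; here they are just iterated.
        hits = []
        for length, table in sets.items():
            next_hop = table.get(addr_bits[:length])  # plain match, no LPM inside a set
            if next_hop is not None:
                hits.append((length, next_hop))
        return max(hits)[1] if hits else None         # keep the longest-prefix hit

    # Usage
    sets = build_sets({"10": "A", "1011": "B", "1100": "C"})
    assert parallel_lookup(sets, "10110000") == "B"
    assert parallel_lookup(sets, "10000000") == "A"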

  • ICOIN - A Scalable Parallel Lookup Framework Avoiding Longest Prefix Match
    Lecture Notes in Computer Science, 2004
    Co-Authors: Zhiyong Liang
    Abstract:

    Fast routing lookups are crucial for the forwarding performance of IP routers, and the Longest Prefix Match requirement is what makes routing lookups difficult. This paper proposes a method for partitioning a routing table that divides all prefixes into several prefix sets in which prefixes do not overlap. Based on this method, the paper also presents a common parallel lookup framework (PRLF) that reduces "longest prefix matching" over all prefixes to plain prefix matching within each of the several prefix sets. The framework effectively simplifies the design of lookup algorithms, improves lookup performance, and is suitable for most lookup algorithms. For a simple binary search algorithm, the framework achieves log₂(2N/B) lookup complexity, where N is the number of prefixes in the routing table and B is an integer greater than 4. The framework also scales easily to IPv6.

Bei Hua - One of the best experts on this subject based on the ideXlab platform.

  • PPOPP - High-performance IPv6 forwarding algorithm for multi-core and multithreaded network processor
    Proceedings of the eleventh ACM SIGPLAN symposium on Principles and practice of parallel programming - PPoPP '06, 2006
    Co-Authors: Xinan Tang, Bei Hua
    Abstract:

    IP forwarding is one of the main bottlenecks in Internet backbone routers, as it requires performing a Longest-Prefix Match at 10 Gbps or higher. IPv6 forwarding further exacerbates the situation because its search space is quadrupled. We propose a high-performance IPv6 forwarding algorithm, TrieC, and implement it efficiently on the Intel IXP2800 network processor (NPU). Programming the multi-core and multithreaded NPU is a daunting task, so we study the interaction between parallel algorithm design and architecture mapping to facilitate efficient implementation, and we experiment with an architecture-aware design principle to guarantee the high performance of the resulting algorithm. This paper investigates the main software design issues that have a dramatic performance impact on any NPU-based implementation: memory space reduction, instruction selection, data allocation, task partitioning, latency hiding, and thread synchronization. We provide insight into how to design an NPU-aware algorithm for high-performance networking applications and, based on a detailed performance analysis of TrieC, offer guidance on developing high-performance networking applications for multi-core and multithreaded architectures.

  • ICESS - TrieC: a high-speed IPv6 lookup with fast updates using network processor
    Embedded Software and Systems, 2005
    Co-Authors: Bei Hua, Xinan Tang
    Abstract:

    Address lookup is one of the main bottlenecks in Internet backbone routers, as it requires the router to perform a Longest-Prefix Match when searching the routing table for the next hop. Ever-increasing Internet bandwidth, continuously growing prefix tables, and the inevitable migration to the IPv6 address architecture further exacerbate this situation. In recent years a variety of high-speed address lookup algorithms have been proposed; however, most of them are ill-suited to IPv6 lookup. This paper proposes a high-speed IPv6 lookup algorithm, TrieC, which achieves high-speed address lookup, fast incremental prefix updates, high scalability, and reasonable memory requirements by taking full advantage of the network processor architecture. The performance of TrieC is evaluated with several IPv6 routing tables of different sizes and prefix length distributions on the Intel IXP2800 network processor (NPU). Simulation shows that TrieC can support IPv6 lookup at the OC-192 line rate; furthermore, if TrieC is pipelined in hardware, it can achieve one IPv6 lookup per memory access.

  • TrieC: A high-speed IPv6 lookup with fast updates using network processor
    Lecture Notes in Computer Science, 2005
    Co-Authors: Bei Hua, Xinan Tang
    Abstract:

    Address lookup is one of the main bottlenecks in Internet backbone routers, as it requires the router to perform a Longest-Prefix Match when searching the routing table for the next hop. Ever-increasing Internet bandwidth, continuously growing prefix tables, and the inevitable migration to the IPv6 address architecture further exacerbate this situation. In recent years a variety of high-speed address lookup algorithms have been proposed; however, most of them are ill-suited to IPv6 lookup. This paper proposes a high-speed IPv6 lookup algorithm, TrieC, which achieves high-speed address lookup, fast incremental prefix updates, high scalability, and reasonable memory requirements by taking full advantage of the network processor architecture. The performance of TrieC is evaluated with several IPv6 routing tables of different sizes and prefix length distributions on the Intel IXP2800 network processor (NPU). Simulation shows that TrieC can support IPv6 lookup at the OC-192 line rate; furthermore, if TrieC is pipelined in hardware, it can achieve one IPv6 lookup per memory access.

Pavel Chuprikov - One of the best experts on this subject based on the ideXlab platform.

  • General ternary bit strings on commodity Longest-Prefix-Match infrastructures
    International Conference on Network Protocols, 2017
    Co-Authors: Pavel Chuprikov, Kirill Kogan, Sergey I Nikolenko
    Abstract:

    Ternary Content-Addressable Memory (TCAM) is a powerful tool for representing network services with line-rate lookup time. There are various software-based approaches to representing multi-field packet classifiers; unfortunately, all of them either require exponential memory or impose additional constraints on field representations (e.g., prefixes or exact values) to achieve line-rate lookup time. In this work, we propose alternatives to TCAM and introduce a novel approach to representing packet classifiers based on ternary bit strings (without constraining field representation) on commodity Longest-Prefix-Match (LPM) infrastructures. These representations are built on a novel property, prefix reorderability, which defines how to transform an ordered set of ternary bit strings into prefixes with LPM priorities in linear memory. Our results are supported by evaluations on large-scale packet classifiers with real parameters from ClassBench; moreover, we have developed a prototype in P4 to support these types of transformations.

  • ICNP - General ternary bit strings on commodity Longest-Prefix-Match infrastructures
    2017 IEEE 25th International Conference on Network Protocols (ICNP), 2017
    Co-Authors: Pavel Chuprikov, Kirill Kogan, Sergey I Nikolenko
    Abstract:

    Ternary Content-Addressable Memory (TCAM) is a powerful tool for representing network services with line-rate lookup time. There are various software-based approaches to representing multi-field packet classifiers; unfortunately, all of them either require exponential memory or impose additional constraints on field representations (e.g., prefixes or exact values) to achieve line-rate lookup time. In this work, we propose alternatives to TCAM and introduce a novel approach to representing packet classifiers based on ternary bit strings (without constraining field representation) on commodity Longest-Prefix-Match (LPM) infrastructures. These representations are built on a novel property, prefix reorderability, which defines how to transform an ordered set of ternary bit strings into prefixes with LPM priorities in linear memory. Our results are supported by evaluations on large-scale packet classifiers with real parameters from ClassBench; moreover, we have developed a prototype in P4 to support these types of transformations.