The gap between the high throughput demanded by Internet traffic and the limited lookup capacity of a router's interfaces has become a bottleneck for packet forwarding. One way to close the gap is to employ a parallel mechanism in which the route lookups of multiple packets are processed simultaneously, yielding a substantial improvement in the system's throughput. This paper proposes a new pipelined trie-based routing architecture with multiple memory blocks, in which the routing table is organized as a prefix trie that is then decomposed into a main trie containing the lower-level nodes and multiple subtries containing the higher-level nodes. Further, the main trie is converted into an index table, and the subtries are evenly distributed across all the memory blocks. A storage management technique called random duplicate allocation (RDA) is employed to balance the storage demands among the memory blocks. Specifically, for each subtrie, the root node is stored in a randomly selected memory block, and the descendant nodes are stored in the subsequent memory blocks level by level, in a circular manner of one block per level. The results of computer simulation experiments indicate that the routing system's aggregate throughput grows almost linearly with the number of memory blocks.
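The placement rule described above (random root block, then one block per trie level in circular order) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name `assign_blocks` and the representation of a subtrie as a list of node depths are assumptions made for the example.

```python
import random

def assign_blocks(subtrie_depth_lists, num_blocks, seed=None):
    """Sketch of random duplicate allocation (RDA) block assignment.

    For each subtrie, pick a random memory block for the root, then
    place a node at depth d in block (root_block + d) % num_blocks,
    i.e. levels wrap around the memory blocks circularly.
    """
    rng = random.Random(seed)
    placements = []
    for depths in subtrie_depth_lists:  # one list of node depths per subtrie
        root_block = rng.randrange(num_blocks)
        placements.append([(root_block + d) % num_blocks for d in depths])
    return placements

# Example: one subtrie with nodes at depths 0..3, spread over 4 blocks.
placement = assign_blocks([[0, 1, 2, 3]], num_blocks=4, seed=1)
```

Because consecutive levels always land in consecutive blocks, a pipelined lookup can visit one block per stage while different packets occupy different stages, which is what lets the aggregate throughput scale with the number of blocks.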