Parallel architectures for processing high speed network signaling protocols

3 Author(s): D. Ghosal (Bellcore, Red Bank, NJ, USA); T. V. Lakshman; Yennun Huang

We study the effectiveness of different parallel architectures for achieving the high throughputs and low latencies needed in processing signaling protocols for high speed networks. A key performance issue is the trade-off between the load balancing gains and the call record management overhead. Arranging processors in large groups potentially yields higher load balancing gains but also incurs higher overhead in maintaining consistency among the replicated copies of the call records. We study this trade-off and its impact on the design of protocol processing systems for two generic classes of parallel architectures, namely, shared memory and distributed memory architectures. In shared memory architectures, maintaining a common message queue in the shared memory can provide the maximal load balancing gains. We show, however, that to optimize performance it is necessary to organize the processors in small groups, since large groups result in higher call record management overhead. In distributed memory architectures, where each processor maintains its own message queue, there is no inherent provision for load balancing. Based on a detailed simulation analysis, we show that organizing the processors into small groups and using a simple distributed load balancing scheme yields modest performance gains even after call record management overheads are taken into account. We find that the common message queue architecture outperforms the distributed architecture in terms of lower response time, due to its improved load balancing capability. Finally, we perform a fault-tolerance analysis with respect to the call-record data structure. Using a simple failure recovery model of the processors and the local memory, we show that in the case of the shared memory architecture, availability is also optimized when processors are organized in small groups. This is because, when comparing architectures, the higher call record management overhead incurred for larger group sizes must be accounted for as system unavailability.
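The central trade-off in the abstract can be illustrated with a toy queueing simulation (a minimal sketch, not the paper's simulation model; the function names and the `record_overhead` parameter are illustrative assumptions): a common message queue served by all processors behaves like an M/M/c system, while per-processor queues with random dispatch and no load balancing behave like c independent M/M/1 systems, and the cost of keeping replicated call records consistent can be approximated as a fixed per-message overhead that grows with group size.

```python
import random

def simulate_shared(num_jobs, arrival_rate, service_rate, num_servers,
                    record_overhead=0.0, seed=1):
    """FCFS common message queue feeding all servers (M/M/c-style).

    record_overhead is a hypothetical per-message cost of keeping the
    replicated call records consistent across the processor group.
    """
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * num_servers   # time at which each server next becomes idle
    total_resp = 0.0
    for _ in range(num_jobs):
        t += rng.expovariate(arrival_rate)           # Poisson arrivals
        i = min(range(num_servers), key=free_at.__getitem__)
        start = max(t, free_at[i])                   # wait for the first free server
        free_at[i] = start + rng.expovariate(service_rate) + record_overhead
        total_resp += free_at[i] - t
    return total_resp / num_jobs

def simulate_distributed(num_jobs, arrival_rate, service_rate, num_servers,
                         seed=1):
    """Per-processor message queues, random dispatch, no load balancing."""
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * num_servers
    total_resp = 0.0
    for _ in range(num_jobs):
        t += rng.expovariate(arrival_rate)
        i = rng.randrange(num_servers)               # random, unbalanced dispatch
        start = max(t, free_at[i])
        free_at[i] = start + rng.expovariate(service_rate)
        total_resp += free_at[i] - t
    return total_resp / num_jobs

if __name__ == "__main__":
    # 4 processors at ~75% utilization: the common queue gives lower
    # mean response time, but a consistency overhead erodes that gain.
    print(simulate_shared(50000, 3.0, 1.0, 4))
    print(simulate_shared(50000, 3.0, 1.0, 4, record_overhead=0.1))
    print(simulate_distributed(50000, 3.0, 1.0, 4))
```

Under these assumptions the common queue yields a lower mean response time than unbalanced per-processor queues, while increasing `record_overhead` (as a stand-in for larger group sizes) pushes its response time back up, which is the tension the abstract resolves by keeping groups small.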

Published in:

IEEE/ACM Transactions on Networking (Volume: 3, Issue: 6)