The availability of multi-core processors and programmable NICs (Network Interface Cards), such as TOEs (TCP/IP Offloading Engines), provides new opportunities for designing efficient network interfaces to cope with the gap between the improvement rates of link bandwidths and microprocessor performance. This gap poses important challenges related to the high computational requirements associated with the traffic volumes and the wider functionality that the network interface has to support. An opportunity to meet these requirements comes from exploiting the parallelism in the communication path by distributing the protocol processing work across the processors available in the computer, i.e., multi-core microprocessors and programmable NICs. Thus, alternatives such as offloading and onloading try to release host CPU cycles through this approach. Nevertheless, whereas onloading uses another general-purpose processor, either included in a multi-core microprocessor or in a symmetric multiprocessor (SMP), offloading takes advantage of the processors in programmable NICs. Experimental results demonstrate that the relative improvement in peak throughput offered by offloading and onloading depends on the ratio of application workload to communication overhead, on the message sizes, and on the characteristics of the system architecture, more specifically the bandwidth of the buses and the way the NIC is connected to the system processor and memory. Thus, in this paper we propose a network interface that takes advantage of both the offloading and onloading approaches while avoiding their respective drawbacks. The performance analysis, done by using a full-system simulator, shows that, in the benchmarks and applications used for the experiments, our hybrid interface improves the latency and bandwidth behavior of both the onloading and offloading approaches.