Virtualization Technology for TCP/IP Offload Engine

5 Author(s)
En-Hao Chang; Chen-Chieh Wang; Chien-Te Liu; Kuan-Chung Chen; et al.
Dept. of Electr. Eng., Nat. Cheng Kung Univ., Tainan, Taiwan

Network I/O virtualization plays an important role in cloud computing. This paper addresses the system-wide virtualization issues of the TCP/IP Offload Engine (TOE) and presents the corresponding architectural designs. We identify three critical factors that affect the performance of a TOE: the I/O virtualization architecture, quality of service (QoS), and the virtual machine monitor (VMM) scheduler. In our device-emulation-based TOE, the VMM manages the socket connections in the TOE directly and can therefore eliminate the packet copy and demultiplexing overheads that arise when virtualizing a layer 2 network card. To further reduce hypervisor intervention, the direct I/O access architecture provides a per-VM physical control interface that removes most of the VMM interventions. The direct I/O access architecture outperforms the device emulation architecture by as much as 30 percent, and achieves 80 percent of the performance of a native 10 Gbit/s TOE system. To continue serving TOE commands for a VM whether it is idle or has been switched out by the VMM, we decouple the TOE I/O command dispatcher from the VMM scheduler. We found that a VMM scheduler with preemptive I/O scheduling, combined with a programmable I/O command dispatcher using a deficit weighted round robin (DWRR) policy, can ensure service fairness while maximizing TOE utilization.
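The DWRR dispatching idea described above can be illustrated with a small sketch: each VM owns a command queue and a weight, and in each round a backlogged VM earns credit proportional to its weight, dispatching commands while its accumulated credit covers their cost. All class and method names below are hypothetical, chosen only to demonstrate the policy, not the paper's implementation.

```python
from collections import deque

class DWRRDispatcher:
    """Illustrative deficit weighted round robin (DWRR) dispatcher for
    per-VM TOE I/O command queues. Names and structure are assumptions
    for demonstration, not the paper's actual design."""

    def __init__(self, quantum=100):
        self.quantum = quantum   # base credit granted per round
        self.queues = {}         # vm_id -> deque of (cmd, cost) pairs
        self.weights = {}        # vm_id -> scheduling weight
        self.deficits = {}       # vm_id -> accumulated credit (deficit counter)

    def register_vm(self, vm_id, weight=1):
        self.queues[vm_id] = deque()
        self.weights[vm_id] = weight
        self.deficits[vm_id] = 0

    def enqueue(self, vm_id, cmd, cost):
        self.queues[vm_id].append((cmd, cost))

    def dispatch_round(self):
        """One DWRR round: each backlogged VM earns weight * quantum
        credit, then dispatches commands while credit suffices."""
        dispatched = []
        for vm_id, q in self.queues.items():
            if not q:
                self.deficits[vm_id] = 0  # idle VMs accumulate no credit
                continue
            self.deficits[vm_id] += self.weights[vm_id] * self.quantum
            while q and q[0][1] <= self.deficits[vm_id]:
                cmd, cost = q.popleft()
                self.deficits[vm_id] -= cost
                dispatched.append((vm_id, cmd))
        return dispatched
```

With two backlogged VMs of weights 2 and 1 and uniform command cost equal to the quantum, a single round dispatches twice as many commands for the heavier VM, which is the weighted-fairness property the abstract attributes to the DWRR policy.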

Published in:

IEEE Transactions on Cloud Computing (Volume: 2, Issue: 2)