Research Issues and Challenges to Advance System Software for Multicore Processors and Data-Intensive Applications

Author: Xiaodong Zhang, Dept. of Computer Science & Engineering, Ohio State University, Columbus, OH

Abstract:

Compared with the rapid advances in multicore processor technology and the rapid shift from computing-intensive to highly data-intensive applications, operating systems have evolved very slowly over several decades. Application users face two major challenges in today's computing environment. At the top of the system hierarchy, private and shared caches are accessed concurrently by many cores, inevitably causing access conflicts that degrade execution performance. At the bottom, the performance bottleneck has shifted from the "memory wall" to the "disk wall", which is a serious bottleneck for many data-intensive applications. Since processor caches and disk storage are largely outside the scope of operating system management, and their increasingly complex operations are not transparent to application users, the performance issues mentioned above have not been effectively addressed at any level of computer systems. We have made a continuous effort to enhance operating systems with two objectives: (1) to make good use of the rich but complex resources of multicore processors, and (2) to access disk data as fast as possible. At the multicore processor level, we are developing new resource allocation and management techniques to improve the effective caching capacity per core and/or per thread, and to minimize congestion in off-chip memory accesses by coordinating memory bandwidth sharing. At the storage level, we enable operating systems to effectively exploit "sequential locality": for the same amount of data, sequential accesses are several orders of magnitude faster than random accesses on disk. In this talk, related research issues and challenges are overviewed, and preliminary results are presented.
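To make the "sequential locality" claim concrete, the following is a minimal sketch (not taken from the talk) that reads the same amount of data from a file twice, once sequentially and once at random offsets, and times both passes. The file name, block size, and block count are arbitrary assumptions; on a rotating disk the random pass is typically far slower, though the OS page cache can mask the gap unless it is dropped or bypassed (e.g., with O_DIRECT) between runs.

/*
 * Minimal sketch (hypothetical parameters): compare sequential and
 * random reads of the same amount of data.  The input file must be at
 * least NUM_BLOCKS * BLOCK_SIZE bytes (16 MiB here).
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <fcntl.h>
#include <unistd.h>

#define BLOCK_SIZE 4096          /* one 4 KiB block per read          */
#define NUM_BLOCKS 4096          /* read 16 MiB in total              */

static double elapsed_sec(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "testfile.dat";
    char buf[BLOCK_SIZE];
    struct timespec t0, t1;

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Sequential pass: NUM_BLOCKS consecutive blocks. */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (pread(fd, buf, BLOCK_SIZE, (off_t)i * BLOCK_SIZE) < 0) {
            perror("pread"); return 1;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("sequential: %.3f s\n", elapsed_sec(t0, t1));

    /* Random pass: same number of blocks at random offsets, which on a
     * disk forces a seek between most requests. */
    srand(42);
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NUM_BLOCKS; i++) {
        off_t off = (off_t)(rand() % NUM_BLOCKS) * BLOCK_SIZE;
        if (pread(fd, buf, BLOCK_SIZE, off) < 0) {
            perror("pread"); return 1;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("random:     %.3f s\n", elapsed_sec(t0, t1));

    close(fd);
    return 0;
}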

Published in:

2008 IEEE/IFIP International Conference on Embedded and Ubiquitous Computing (EUC '08), Volume 1

Date of Conference:

17-20 December 2008