Compared with the rapid advances in multicore processors and the rapid shift from computation-intensive to highly data-intensive applications, operating systems have evolved very slowly over the past several decades. Application users face two major challenges in today's computing environment. At the top level of the system hierarchy, private and shared caches are accessed concurrently by many cores, inevitably causing access conflicts that degrade execution performance. At the bottom level, the performance bottleneck has shifted from the "memory wall" to the "disk wall," which is a serious bottleneck for many data-intensive applications. Since processor caches and disk storage fall largely outside the scope of operating system management, and their increasingly complex operations are not transparent to application users, these performance issues have not been effectively addressed at any level of computer systems. We have made a continuous effort to enhance operating systems with two objectives: (1) to make good use of the rich but complex resources of multicore processors, and (2) to access disk data as fast as possible. At the multicore processor level, we are developing new resource allocation and management techniques to improve the effective caching capacity per core and/or per thread, and to minimize congestion in off-chip memory accesses by coordinating memory bandwidth sharing. At the storage level, we enable operating systems to effectively exploit "sequential locality": for the same amount of data, sequential accesses are several orders of magnitude faster than random accesses on disks. This talk overviews the related research issues and challenges and presents preliminary results.
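The sequential-locality claim can be illustrated with a simple micro-benchmark that reads the same set of blocks from a file once in order and once in shuffled order. This sketch is not from the talk; the block size, file size, and function names are illustrative choices. Note that on a small, freshly written file the OS page cache will hide most of the disk behavior; the orders-of-magnitude gap the abstract describes appears on real disks with cold caches and files much larger than memory.

```python
import os
import random
import tempfile
import time

BLOCK = 4096        # read unit; 4 KiB is a typical block size (assumption)
NUM_BLOCKS = 2048   # 8 MiB test file, kept small for illustration

def make_test_file(path):
    """Fill a file with NUM_BLOCKS blocks of random bytes."""
    with open(path, "wb") as f:
        f.write(os.urandom(BLOCK * NUM_BLOCKS))

def sequential_read(path):
    """Read the whole file front to back; returns total bytes read."""
    total = 0
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK)
            if not chunk:
                break
            total += len(chunk)
    return total

def random_read(path):
    """Read the same blocks, but seek to them in shuffled order."""
    total = 0
    offsets = list(range(NUM_BLOCKS))
    random.shuffle(offsets)
    with open(path, "rb") as f:
        for i in offsets:
            f.seek(i * BLOCK)
            total += len(f.read(BLOCK))
    return total

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "testfile")
    make_test_file(path)
    t0 = time.perf_counter()
    seq = sequential_read(path)
    t1 = time.perf_counter()
    rnd = random_read(path)
    t2 = time.perf_counter()
    print(f"sequential: {seq} bytes in {t1 - t0:.4f}s")
    print(f"random:     {rnd} bytes in {t2 - t1:.4f}s")
```

Both functions touch exactly the same data, so any timing difference comes purely from the access pattern, which is the point the abstract makes about sequential locality.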