HiPPAI: High Performance Portable Accelerator Interface for SoCs

7 Author(s)

Specialized hardware accelerators are enabling today's System on Chip (SoC) platforms to target various applications. In this paper we show that as these SoCs evolve in complexity and usage, the programming models for such platforms need to evolve beyond the traditional driver-oriented architecture. Using a test setup that employs a programmable FPGA-based accelerator to implement one of the critical computation functions of a Mobile Augmented Reality workload, we describe the performance drawbacks that a conventional programming model brings to compute environments employing hardware accelerators. We show that these performance issues become more critical as interface latencies continue to improve with better hardware integration and more efficient interconnect technologies. Under these usage scenarios, we show with measurements that the software overheads imposed by the current programming model, such as those associated with system calls, memory copies, and memory address translations, account for a major part of the performance overhead. We then propose a novel High Performance Portable Accelerator Interface (HiPPAI) for SoC platforms using hardware accelerators to reduce these software overheads. In addition, we position the new programming interface to allow function portability between software and hardware function accelerators, reducing application development effort. Our proposed model relies on two major building blocks for performance improvement: a uniform virtual memory addressing model based on hardware IOMMU support, and direct user-mode access to accelerators. We demonstrate how these enhancements reduce the overheads of system calls and address translations at the user/kernel boundary in traditional software stacks and enable function portability.
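The abstract does not spell out the programming interface itself, so the C sketch below is only an illustration of the two building blocks as described: the device node /dev/hippai0, the register layout, and the hippai_open/filter names are assumptions, not the paper's actual API. It shows the accelerator's command page being mmap'd into the process once, so each invocation is a plain user-mode store rather than a system call, and user-space virtual addresses being handed to the device unmodified on the assumption that an IOMMU can translate them, avoiding copies and kernel-side address translation. A software fallback with the same signature stands in for the function-portability aspect.

/*
 * Hypothetical sketch of a user-mode accelerator interface in the spirit
 * of the abstract's two building blocks; the device path, register layout,
 * and hippai_* names are assumptions, not the paper's actual API.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Assumed MMIO layout of the accelerator's command page. */
struct accel_regs {
    volatile uint64_t src;      /* user-space virtual address of input  */
    volatile uint64_t dst;      /* user-space virtual address of output */
    volatile uint64_t len;      /* buffer length in bytes               */
    volatile uint64_t doorbell; /* write 1 to start, poll 0 for done    */
};

static struct accel_regs *regs; /* NULL => fall back to software */

/* One-time setup: map the device's register page into user space.
 * After this, no system call is needed per invocation. */
static int hippai_open(const char *devpath)
{
    int fd = open(devpath, O_RDWR);
    if (fd < 0)
        return -1;
    regs = mmap(NULL, sizeof(*regs), PROT_READ | PROT_WRITE,
                MAP_SHARED, fd, 0);
    close(fd); /* the mapping stays valid after close */
    if (regs == MAP_FAILED) {
        regs = NULL;
        return -1;
    }
    return 0;
}

/* Software implementation of the same function: the portability hook. */
static void filter_sw(const uint8_t *src, uint8_t *dst, size_t len)
{
    memcpy(dst, src, len); /* placeholder for the real computation */
}

/* Portable entry point: same signature whether the work runs on the CPU
 * or on the accelerator.  With an IOMMU translating the process's virtual
 * addresses, the raw pointers are passed to the device unmodified. */
static void filter(const uint8_t *src, uint8_t *dst, size_t len)
{
    if (!regs) {
        filter_sw(src, dst, len);
        return;
    }
    regs->src = (uintptr_t)src;   /* no copy, no kernel translation */
    regs->dst = (uintptr_t)dst;
    regs->len = len;
    regs->doorbell = 1;           /* user-mode kick, no syscall     */
    while (regs->doorbell)        /* simple polling completion      */
        ;
}

int main(void)
{
    uint8_t in[4096], out[4096];
    memset(in, 0xAB, sizeof(in));

    if (hippai_open("/dev/hippai0") != 0)
        fprintf(stderr, "accelerator unavailable, using software path\n");

    filter(in, out, sizeof(in));
    printf("out[0] = 0x%02X\n", out[0]);
    return 0;
}

In a conventional driver-oriented stack, each such call would instead cross the user/kernel boundary (for example via an ioctl), copy or pin the buffers, and translate addresses in the kernel, which is precisely the per-invocation overhead the abstract attributes to system calls, memory copies, and address translations.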

Published in:

2009 International Conference on High Performance Computing (HiPC)

Date of Conference:

16-19 Dec. 2009
