Accelerating ODE-Based Simulation of General and Heterogeneous Biophysical Models Using a GPU

7 Author(s)
Okuyama, T. (Dept. of Comput. Sci., Osaka Univ., Suita, Japan); Okita, M.; Abe, T.; Asai, Y.; and 3 more authors

Flint is a simulator that numerically integrates heterogeneous biophysical models described by large sets of ordinary differential equations. It uses an internal bytecode representation of simulation-related expressions so that it can handle a variety of general-purpose biophysical models. We propose two acceleration methods for Flint using a graphics processing unit (GPU). The first method interprets multiple bytecodes in parallel on the GPU, automatically parallelizing the simulation with a level scheduling algorithm. We implement an interpreter of the Flint bytecode suited for the GPU, which reduces both the number of memory accesses and divergent branches to achieve higher performance. The second method translates a model, through the internal bytecode, into source code for both the CPU and the GPU; bytecode unification shrinks the generated source code and thereby speeds up its compilation. For large models, in which tens of thousands or more expressions can be evaluated simultaneously, the translated code running on the GPU achieves computational performance up to 2.7 times higher than the same code running on a CPU. For small models, by contrast, the CPU is faster than the GPU. The translated code therefore dynamically determines whether to run on the CPU or the GPU by profiling the first few iterations of the simulation.
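The level scheduling mentioned in the abstract groups the model's expressions by their position in the dependency graph: every expression in a level depends only on results from earlier levels, so all expressions within one level can be evaluated in parallel (e.g., by one GPU kernel launch per level). The sketch below illustrates the idea in plain Python; the function name, the `deps` input format, and the example graph are illustrative assumptions, not Flint's actual data structures.

```python
from collections import defaultdict

def level_schedule(deps):
    """Group expressions into levels for parallel evaluation.

    `deps` maps each expression id to the ids it depends on
    (an illustrative format, not Flint's internal bytecode).
    Returns a list of levels; all expressions in one level are
    mutually independent and may be evaluated concurrently.
    """
    levels = {}

    def level_of(node):
        # Level = 1 + max level of dependencies; sources get level 1.
        if node not in levels:
            ds = deps.get(node, ())
            levels[node] = 1 + max((level_of(d) for d in ds), default=0)
        return levels[node]

    for node in deps:
        level_of(node)

    groups = defaultdict(list)
    for node, lvl in levels.items():
        groups[lvl].append(node)
    return [sorted(groups[lvl]) for lvl in sorted(groups)]

# Example DAG: c depends on a and b; d depends on c.
schedule = level_schedule({"a": [], "b": [], "c": ["a", "b"], "d": ["c"]})
# schedule → [['a', 'b'], ['c'], ['d']]
```

A scheduler like this also makes the CPU/GPU trade-off in the abstract concrete: the width of the levels (here 2, 1, 1) is what determines how much parallelism a GPU can exploit, which is why small models with narrow levels remain faster on the CPU.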

Published in:

IEEE Transactions on Parallel and Distributed Systems (Volume: 25, Issue: 8)