How to avoid making the same Mistakes all over again: What the parallel-processing community has (failed) to offer the multi/many-core generation

Author:

Träff, J.L. (NEC Labs. Eur., NEC Eur. Ltd., St. Augustin)

We observe that in the past decade parallel processing and parallel algorithmics have disappeared from mainstream computer-science curricula and moved either into advanced graduate courses or into the application domains. Current textbook availability illustrates this well. The influential book by Cormen, Leiserson and Rivest (1990) had a substantial chapter on PRAM algorithmics in its first edition that was dropped from the second edition (2001); the parallel algorithms book by JáJá (1992) is no longer in print; and recent algorithms texts by Kleinberg and Tardos (2005) and Dasgupta et al. (2007) do not touch on parallelism at all. In the past decade it was entirely possible to complete an advanced computer-science degree without any exposure to parallel processing, and in particular to parallel algorithmics.

It is timely for the parallel-processing community to take stock: What does the community have to offer the upcoming generation that will have to deal with parallelism across a much broader range of applications? What are the fundamental paradigms and techniques of the past? How can these be most effectively conveyed, and to whom? What were the mistakes and wrong turns of the past, and how can their repetition be avoided? Which problems remain unsolved, and what are the major new challenges?

Published in:

Parallel and Distributed Processing, 2008. IPDPS 2008. IEEE International Symposium on

Date of Conference:

14-18 April 2008