We observe that over the past decade parallel processing and parallel algorithmics have disappeared from mainstream computer-science curricula, migrating either into advanced graduate courses or into the application domains. Current textbook availability illustrates this well: the influential book by Cormen, Leiserson, and Rivest (1990) had a substantial chapter on PRAM algorithmics in its first edition that was dropped from the second edition (2001); the parallel algorithms book by JáJá (1992) is no longer in print; and recent algorithms texts by Kleinberg and Tardos (2005) and Dasgupta et al. (2007) do not touch on parallelism at all. In the past decade it was entirely possible to complete an advanced computer-science degree without any exposure to parallel processing, and in particular to parallel algorithmics.

It is timely for the parallel-processing community to take stock: What does the community have to offer the upcoming generation, which will have to deal with parallelism across a much broader range of applications? What are the fundamental paradigms and techniques of the past? How can these be most effectively conveyed, and to whom? What were the mistakes and wrong turns of the past, and how can their repetition be avoided? Which problems remain unsolved, and what are the major new challenges?