Multi-cores are here to stay, whether we like it or not. With the core count quadrupling every three years, chips with hundreds of processor cores are projected within the next decade. The question is how much of their computational power can be unleashed, what it will take to unleash it, and how research can best accelerate progress. Several decades of research in multiprocessing have not really made the case. On the other hand, now that coarse-grain parallelism seems to be our only hope and the computing landscape is arguably different, opportunities may arise. The following cross-cutting issues will be debated in this panel, with the hope of distilling new avenues for parallelism exploitation:

- Is today's computing landscape (technology, applications, and market) sufficiently different from the past to exploit multiprocessors? If yes, in what sense? If not, why?
- Do we need more research in multiprocessing, given past work? If yes, what are the biggest challenges? If not, state the reasons.
- Will progress in software/architecture make it possible for sequential languages to prevail? If yes, what are the top research priorities to make that happen? If not, what are the visions for a parallel-language paradigm shift, and what are the major challenges in software/architecture research to accelerate its uptake in the programming community?
- Would multi-disciplinary research (across the applications, algorithms, software, and architecture areas) be a good way to accelerate developments? If so, which areas should interact more closely, and with what goals in mind?