This paper presents alternatives and performance results obtained by analyzing parallelization on a cluster of multicore nodes. The ultimate goal is to determine whether the shared- and distributed-memory parallel processing models can be treated independently, or whether each affects the other and both must be considered simultaneously. The application used as a testbed is a classical one in high-performance computing: matrix multiplication. Based on experiments combining both kinds of parallel models, results identify the conditions under which performance is optimized and where parallelization efforts should be focused on clusters of multicore nodes. In every case, all processing units must be used effectively in order to optimize the performance of parallel applications.