Modern multi-core processors are the predominant means of improving the performance of parallel applications. This paper presents a triple-based multi-core architecture that provides native hardware support for object-oriented methodology and applications. The model represents objects explicitly and supports message-based communication, which maps well onto the standard interaction style of object-oriented languages. The memory wall, however, remains a bottleneck: the disparity between how fast a CPU can operate on data and how fast it can fetch that data must be reduced. We propose a hierarchical shared memory system (HSM) working with a partially inclusive cache-mapping policy, together with a new object management model that uses an object table and a recycle-stack scheme to support explicit dynamic object management. Our cache design addresses the cost of cache coherence by allowing applications to control the amount of sharing between cores. Experimental comparison of our object management with common linked-structure object organization methods shows that our approach is superior in both the spatial and temporal efficiency of parallel memory access, and that it requires less storage space to organize objects.