This paper describes an implementation of a region-based memory manager that performs allocation and deallocation in constant time. In addition, support for arrays that can grow arbitrarily has been implemented. As a result, external fragmentation is overcome, and the occurrence of memory leaks has been considerably reduced. These features make this allocator particularly useful for computer vision applications. The main goal has been to replace the general-purpose allocator in some critical places in order to remove fragmentation and improve performance. The use of regions also reduces programmer burden. The main disadvantage of this method is that it leads to higher memory-consumption peaks than general-purpose allocators, so system developers need to establish an upper bound for the maximum memory that can be allocated at once. In this paper, the performance of our approach is compared against an architecture-optimized general-purpose memory allocator in a real-time vision application.