Efficient embedded computing requires extended compiler awareness of the underlying hardware platform: execution-time and energy-consumption estimates should guide optimization. Conventional compilers employ rough, energy-unaware estimates for fast decision making. Real-time compilers quickly determine bounds on worst-case execution time (WCET) but ignore energy. Embedded compilers estimate average time and energy accurately but require time-consuming profiling. We propose a novel estimation method based on energy-aware abstract interpretation, parameterized by cache configuration and target technology. Our estimates exhibit derivatives as accurate as those obtained by profiling but are computed at least 1000 times faster, making them suitable for driving embedded code optimizations through iterative improvement.
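To make the idea concrete, the following is a minimal illustrative sketch (not the paper's actual analysis) of how a static, energy-aware estimate can be computed from a cache configuration and assumed per-access energy costs. All names (`estimate_energy`, `E_HIT_NJ`, `E_MISS_NJ`) and parameters are invented for illustration; a direct-mapped cache and a straight-line access trace are assumed, whereas the proposed method handles general programs via abstract interpretation.

```python
# Illustrative sketch: static hit/miss classification for a straight-line
# memory-access trace over an assumed direct-mapped cache, combined with
# assumed per-access energy costs from the target technology.

E_HIT_NJ = 0.5    # assumed energy per cache hit (nJ) -- technology parameter
E_MISS_NJ = 6.0   # assumed energy per cache miss, incl. memory access (nJ)

def estimate_energy(trace, num_sets, line_size):
    """Classify each access of a straight-line trace as hit or miss by
    tracking the (abstract) cache state, then sum per-access energies."""
    cache = {}            # set index -> cached block tag
    hits = misses = 0
    for addr in trace:
        block = addr // line_size      # memory block containing this address
        s = block % num_sets           # cache set the block maps to
        if cache.get(s) == block:
            hits += 1
        else:
            misses += 1
            cache[s] = block           # update abstract cache state
    return hits * E_HIT_NJ + misses * E_MISS_NJ

# Two passes over 16 consecutive words: the estimate is cheap to recompute
# for each candidate code/data layout, which is what iterative improvement
# needs -- no profiling run is required.
trace = [i * 4 for i in range(16)] * 2
print(estimate_energy(trace, num_sets=4, line_size=16))   # prints 38.0
```

The point of such an estimate is that it reacts correctly to optimization decisions (its derivatives track the real cost), even when its absolute value is only approximate.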