Abstract:
Automating physical database design has remained a long-term interest in database research due to the substantial performance gains afforded by optimised structures. Despite significant progress, the majority of today's commercial solutions are highly manual, requiring offline invocation by database administrators (DBAs) who are expected to identify and supply representative training workloads. Even the latest advances, such as query stores, provide only limited support for dynamic environments. This status quo is untenable: identifying representative static workloads is no longer realistic, and physical design tools remain susceptible to the query optimiser's cost misestimates. We propose a self-driving approach to online index selection that eschews the DBA and query optimiser, and instead learns the benefits of viable structures through strategic exploration and direct performance observation. We view the problem as one of sequential decision making under uncertainty, specifically within the bandit learning setting. Multi-armed bandits balance exploration and exploitation, provably guaranteeing average performance that converges to that of the best policy with perfect hindsight. Our simplified bandit framework outperforms deep reinforcement learning (RL) in both convergence speed and performance volatility. Comprehensive empirical results demonstrate up to a 75% speed-up on shifting and ad-hoc workloads and a 28% speed-up on static workloads against a state-of-the-art commercial tuning tool, and up to a 58% speed-up against deep RL alternatives.
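To make the exploration-exploitation idea concrete, below is a minimal sketch of bandit-based index selection using a classic UCB1 policy, where each candidate index is an arm and the reward is an observed speed-up. This is an illustration only, not the paper's actual algorithm: the index names, the reward model, and the simulated benefit values are all hypothetical stand-ins for direct performance observation on a real system.

```python
# Minimal sketch: UCB1-style bandit over candidate indexes (illustrative only).
# CANDIDATE_INDEXES, TRUE_BENEFIT, and observe_reward are hypothetical; in a
# real system the reward would come from measured query execution times.
import math
import random

CANDIDATE_INDEXES = ["idx_orders_date", "idx_lineitem_part", "idx_customer_nation"]

# Simulated per-index mean speed-up (0..1); unknown to the learner.
TRUE_BENEFIT = {
    "idx_orders_date": 0.6,
    "idx_lineitem_part": 0.3,
    "idx_customer_nation": 0.1,
}

def observe_reward(index_name: str) -> float:
    """Stand-in for direct performance observation: a noisy speed-up sample."""
    return max(0.0, min(1.0, random.gauss(TRUE_BENEFIT[index_name], 0.05)))

def ucb1(rounds: int = 500) -> dict:
    counts = {a: 0 for a in CANDIDATE_INDEXES}    # times each arm was played
    values = {a: 0.0 for a in CANDIDATE_INDEXES}  # running mean reward per arm
    for t in range(1, rounds + 1):
        untried = [a for a in CANDIDATE_INDEXES if counts[a] == 0]
        if untried:
            # Play every arm once before applying the UCB rule.
            arm = untried[0]
        else:
            # UCB1: empirical mean (exploitation) + confidence bonus (exploration).
            arm = max(
                CANDIDATE_INDEXES,
                key=lambda a: values[a] + math.sqrt(2 * math.log(t) / counts[a]),
            )
        r = observe_reward(arm)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]  # incremental mean update
    return values

if __name__ == "__main__":
    print(ucb1())  # estimates converge; the highest-benefit index dominates play
```

Because the confidence bonus shrinks as an arm is sampled, play concentrates on the best index while still occasionally re-checking the others, which is the mechanism behind the hindsight-optimality guarantees the abstract refers to.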
Date of Conference: 19-22 April 2021