Learning mechanisms that operate in unknown environments must be able to control unknown dynamical systems efficiently. Many approaches to this problem face the so-called exploration-exploitation dilemma, in which the controller must sacrifice efficient performance in order to learn control strategies better than those already known: during the exploration period, the closed-loop system may exhibit poor or even unstable performance. In this paper, we show that, when the control goal is to stabilize an unknown dynamical system by means of state feedback, exploitation and exploration can be performed concurrently without sacrificing efficiency. This is made possible through an appropriate combination of recent results developed by the author in the areas of adaptive control and adaptive optimization with a new result on the convex construction of control Lyapunov functions for nonlinear systems. The resulting scheme guarantees arbitrarily good performance in the regions where the system is controllable. Both theoretical analysis and simulation results on a particularly challenging control problem support this claim.
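As background for the control-Lyapunov-function machinery invoked above, the following minimal sketch shows how a known CLF yields a stabilizing state feedback for a control-affine system. This is not the paper's scheme: the scalar plant, the candidate CLF, and the use of Sontag's universal formula are illustrative assumptions only.

```python
import math

def sontag_control(a, b):
    # Sontag's universal formula for a control-affine system x' = f(x) + g(x)*u
    # with CLF V: here a = L_f V(x) and b = L_g V(x). Whenever b != 0 this
    # choice makes V' = a + b*u = -sqrt(a**2 + b**4) < 0.
    if abs(b) < 1e-12:
        return 0.0
    return -(a + math.sqrt(a * a + b ** 4)) / b

# Illustrative plant (an assumption, not the paper's benchmark):
# x' = x**3 + u, with candidate CLF V(x) = 0.5 * x**2.
f = lambda x: x ** 3
g = lambda x: 1.0

x, dt = 1.5, 1e-3
for _ in range(20000):           # forward-Euler simulation over 20 time units
    a = x * f(x)                 # L_f V = V'(x) * f(x)
    b = x * g(x)                 # L_g V = V'(x) * g(x)
    u = sontag_control(a, b)
    x += dt * (f(x) + u)

print(abs(x) < 1e-2)             # the feedback drives the state near the origin
```

Note that this construction presupposes that a CLF is already known; the difficulty the paper addresses is that for an unknown system no such V is available in advance, which is where the convex CLF construction enters.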