We describe an evolutionary approach to one of the most challenging problems in computer music: modeling how skilled musicians manipulate sound properties such as timing and amplitude to express their view of the emotional content of a musical piece. Starting from a collection of audio recordings of real performances, we apply a sequential-covering genetic algorithm to obtain computational models of different aspects of expressive performance. We use these models to automatically synthesize performances with the timing and energy expressiveness that characterizes the playing of a professional musician. The reported results indicate that evolutionary computation is well suited to this problem. In particular, our evolutionary algorithm offers several potential advantages over other supervised learning algorithms, notably the ability to non-deterministically obtain models that capture different plausible interpretations of a musical piece.
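The sequential-covering strategy named above can be sketched in general terms: a genetic algorithm evolves one rule at a time, the examples covered by the best rule are removed, and the process repeats on the remainder. The sketch below is a minimal illustration of that loop only; the interval-based rule representation, the fitness function, and the toy random data are assumptions for the example, not the representation or data actually used in the reported work.

```python
import random

random.seed(0)

# Toy dataset: 60 examples with two features in [0, 1] and a binary target.
# (Hypothetical stand-in for extracted performance features; not real data.)
DATA = [([random.random(), random.random()], random.choice([0, 1]))
        for _ in range(60)]
N_FEAT = 2

def covers(rule, x):
    """A rule is one (low, high) interval per feature; it covers an
    example when every feature value lies inside its interval."""
    return all(lo <= v <= hi for (lo, hi), v in zip(rule, x))

def fitness(rule, data):
    """Reward rules whose covered examples mostly share one class,
    with a mild bonus for covering more examples."""
    covered = [t for x, t in data if covers(rule, x)]
    if not covered:
        return 0.0
    majority = max(covered.count(0), covered.count(1))
    return (majority / len(covered)) * len(covered) ** 0.5

def random_rule():
    return [tuple(sorted(random.random() for _ in range(2)))
            for _ in range(N_FEAT)]

def mutate(rule):
    """Perturb one interval bound by Gaussian noise, clamped to [0, 1]."""
    child = [list(iv) for iv in rule]
    i, j = random.randrange(N_FEAT), random.randrange(2)
    child[i][j] = min(1.0, max(0.0, child[i][j] + random.gauss(0, 0.1)))
    child[i] = sorted(child[i])
    return [tuple(iv) for iv in child]

def evolve_rule(data, pop_size=30, gens=40):
    """Simple elitist GA: keep the better half, refill with mutants."""
    pop = [random_rule() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda r: fitness(r, data), reverse=True)
        elite = pop[:pop_size // 2]
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=lambda r: fitness(r, data))

def sequential_covering(data, max_rules=5):
    """Evolve a rule, remove the examples it covers, and repeat."""
    rules, remaining = [], list(data)
    while remaining and len(rules) < max_rules:
        rule = evolve_rule(remaining)
        covered = [ex for ex in remaining if covers(rule, ex[0])]
        if not covered:
            break
        rules.append(rule)
        remaining = [ex for ex in remaining if not covers(rule, ex[0])]
    return rules

rules = sequential_covering(DATA)
```

Because the GA's population and mutations are stochastic, repeated runs with different seeds yield different rule sets, which mirrors the abstract's point about non-deterministically obtaining models for alternative interpretations.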