Emulab is a large-scale, remotely accessible network and distributed systems testbed used by over a thousand researchers around the world. In Emulab, users create "experiments" composed of arbitrarily interconnected groups of dedicated machines that are automatically configured according to user specifications. In the last year alone, users have run over 18,000 such experiments, expecting consistent and correct behavior from the ever-evolving 500,000-line code base and 3,000 discrete hardware components that comprise Emulab. We have found conventional testing insufficient to meet these expectations and have therefore built continuous, automatic validation into the system. This paper describes Linktest, an integral part of our validation framework that is responsible for end-to-end validation during the configuration of every experiment. Developing and deploying such a validation approach faces numerous challenges, including the need for a code path entirely independent of the rest of the Emulab software. We describe our system's motivation, its design and implementation, and our experience.
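To make the idea of end-to-end validation concrete, the following is a minimal sketch, not Emulab's actual implementation, of the kind of check a tool like Linktest performs: comparing observed link characteristics against the user's specification and flagging any metric outside a tolerance. All names, thresholds, and metrics here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LinkSpec:
    """User-requested link characteristics (hypothetical fields)."""
    latency_ms: float       # one-way delay the user asked for
    bandwidth_kbps: float   # shaped bandwidth
    loss_rate: float        # fraction of packets expected to be dropped

def validate_link(spec: LinkSpec,
                  observed_latency_ms: float,
                  observed_bandwidth_kbps: float,
                  observed_loss_rate: float,
                  rel_tol: float = 0.10,
                  abs_latency_slop_ms: float = 0.5) -> list:
    """Return a list of human-readable violations; an empty list means the
    observed link behavior matches its specification within tolerance."""
    errors = []
    # Latency: allow either a relative tolerance or a small absolute slop,
    # whichever is larger, so zero-delay links are not flagged spuriously.
    if abs(observed_latency_ms - spec.latency_ms) > \
            max(rel_tol * spec.latency_ms, abs_latency_slop_ms):
        errors.append("latency %.2fms outside spec %.2fms"
                      % (observed_latency_ms, spec.latency_ms))
    if abs(observed_bandwidth_kbps - spec.bandwidth_kbps) > \
            rel_tol * spec.bandwidth_kbps:
        errors.append("bandwidth %.1fkbps outside spec %.1fkbps"
                      % (observed_bandwidth_kbps, spec.bandwidth_kbps))
    # Loss: relative tolerance plus a small absolute floor for near-zero rates.
    if abs(observed_loss_rate - spec.loss_rate) > \
            max(rel_tol * spec.loss_rate, 0.005):
        errors.append("loss %.4f outside spec %.4f"
                      % (observed_loss_rate, spec.loss_rate))
    return errors
```

In a real deployment the observed values would come from active measurements (e.g., timed probe packets and bulk transfers) run over a code path independent of the software that configured the links, which is precisely what makes the check an end-to-end validation rather than a re-reading of configuration state.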