The world's largest scientific instrument, the Large Hadron Collider (LHC), is currently being assembled near Geneva, Switzerland. When operational, it will generate several petabytes of data per year for a period of at least ten years. These data are acquired at rates of up to nearly 2 GB/s and are analysed by thousands of physicists worldwide. In order to exploit the full discovery potential of the LHC, a worldwide grid is currently being deployed. As part of the commissioning of this grid, a series of service challenges is being conducted, ramping up the service progressively. These challenges address not only the need to distribute data reliably between many sites around the world, not in burst mode but 24x7 for essentially the entire production lifetime of the machine, but also, and much more importantly, the need to meet the experiments' requirements for all of their offline data processing.