There have been several attempts, using various approaches, to assess the additional cost of running an electricity system when intermittent renewable generation provides a significant proportion of the energy. The key issues are the statistical difference between the resource availability of the intermittent source and that of conventional generation, and the contribution the intermittent source can make to meeting system peak demand while maintaining system reliability. There is considerable agreement over the capacity credits that can be attributed to renewable energy sources, that is, the amount of conventional capacity that renewables can reliably displace, yet the implications for costs have proved more controversial. Approaches to calculating changes in overall system cost are examined, and an expression is identified for the additional cost that intermittent generation imposes on a system by virtue of its intermittent nature. It is then shown that this expression can be reconciled with approaches that treat intermittent renewables on a stand-alone basis and factor in the additional costs of 'standby' capacity. The main source of divergence between estimates of the cost of intermittency is shown to be the load factor implicitly assumed for the conventional plant used as a reference. There is only one consistent way to impute the costs of intermittency when the unit cost of intermittent plant is compared with that of baseload generation plant.
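As a purely illustrative sketch (the simple cost model and all figures below are hypothetical assumptions, not taken from the analysis), the stand-alone 'standby capacity' view can be written as the fixed cost of the backup capacity that the intermittent plant does not displace, spread over its annual energy output, and the sensitivity of the comparison to the load factor assumed for the reference conventional plant can be made explicit:

```python
HOURS_PER_YEAR = 8760


def intermittency_cost_per_kwh(backup_fixed_cost_kw_yr: float,
                               capacity_credit: float,
                               intermittent_load_factor: float) -> float:
    """Cost of intermittency per kWh under the 'standby capacity' view.

    Hypothetical model: each kW of intermittent plant requires
    (1 - capacity_credit) kW of conventional backup to be retained
    for reliability; that backup's fixed cost is spread over the
    intermittent plant's annual energy output.
    """
    backup_kw = 1.0 - capacity_credit
    energy_kwh = intermittent_load_factor * HOURS_PER_YEAR
    return backup_kw * backup_fixed_cost_kw_yr / energy_kwh


def reference_unit_cost_per_kwh(fixed_cost_kw_yr: float,
                                fuel_cost_per_kwh: float,
                                load_factor: float) -> float:
    """Unit cost of the reference conventional plant at an assumed
    load factor. A lower assumed load factor spreads the fixed cost
    over less energy, raising the unit cost against which the
    intermittent plant is compared.
    """
    return fixed_cost_kw_yr / (load_factor * HOURS_PER_YEAR) + fuel_cost_per_kwh


# Hypothetical numbers: 60 currency-units/kW-yr backup fixed cost,
# 30% intermittent load factor, 20% capacity credit.
adder = intermittency_cost_per_kwh(60.0, 0.20, 0.30)

# The same intermittency adder is judged against very different
# benchmarks if the reference plant is assumed to run baseload (85%)
# rather than mid-merit (50%) - the source of divergence the
# abstract identifies.
ref_baseload = reference_unit_cost_per_kwh(60.0, 0.02, 0.85)
ref_midmerit = reference_unit_cost_per_kwh(60.0, 0.02, 0.50)
```

With these assumed numbers the intermittency adder comes out at roughly 0.018 per kWh, but the benchmark it is set against, and hence the imputed cost of intermittency, shifts with the load factor assumed for the reference plant.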