
SECTION I

POWER ENGINEERING IN THE “FIRST CENTURY”

The electrification of the world was named the most significant engineering achievement of the 20th century by the National Academy of Engineering in 2000 [1]. This process can be viewed as an extension of the industrial revolution, in which successively higher levels of energy have been controlled and harnessed to relieve humankind of its burdens. Viewed as a vehicle for extending human capabilities, electric energy was widely applied first in mining and industry, and later in lighting and control of the environment. As an illustration of the connection with productivity, Fig. 1 plots annual electric energy production against gross domestic product (GDP) in the United States for 1950–2010. Over 260 000 km of high-voltage transmission are in operation in North America. The main engineering advances in the first century of electrification include: the development of practical electric energy conversion devices; advances in high-voltage engineering; and the development of synchronously connected electric networks.

Fig. 1. The annual electric energy generation as related to the corresponding GDP in the United States, 1950–2010 [2], [3].

Between 1900 and 1999, major engineering decisions were made, including: the concept of central station generation; the use of alternating current (ac) generation, transmission, and distribution at 50 or 60 Hz; and, more recently, the shift in pricing philosophy from "cost to serve" to "power marketing." According to [2], in 2009, approximately 475 quadrillion BTU of energy was produced worldwide, with 21% generated in North America and 27% in Asia-Oceania. While the foregoing focuses on the size, design, and operation of bulk energy systems, there have been many significant supportive milestones in power engineering that set the field apart from many other branches of engineering. As an example, the practice of power system protective relaying, which senses anomalous conditions and takes corrective action to protect the assets of the system, is largely indigenous to power engineering.

A second highlighted power engineering feature is the analysis of large-scale systems of tens of thousands (and more) of states, and the digital assessment of the viability of design and operating strategies. The use of large-scale digital analysis began in the 1940s with electric power flow studies: a steady-state analysis of ac interconnected systems, formulated as a nonlinear problem whose algebraic equations model the Kirchhoff laws and the relationships among power, voltage, and current [4], [5]. These types of studies and their dynamic counterparts have led to:

  • large-scale dynamic studies, e.g., [6] and [7];
  • economic dispatch of large systems, e.g., [8];
  • computationally complex applications including optimal commitment of generating units (i.e., when to “turn on” generators over long-time horizons), e.g., [9].

In each of these areas, the norms of reliability, the size of interconnected ac systems, and the economic importance of the studies have had broader impacts on other fields of engineering and beyond including topics in parallel processing, sparsity programming, probabilistic analysis, data communication, and high-voltage engineering.
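As a toy illustration of the power flow problem described above, the sketch below solves the nonlinear bus equations for a two-bus system by Gauss-Seidel iteration. The line impedance and load values are illustrative, not drawn from any study cited here:

```python
# Gauss-Seidel power flow on a two-bus system (illustrative data).
# Bus 1 is the slack bus; bus 2 carries a complex load S2 (negative injection).
z = 0.01 + 0.05j                  # series line impedance, per unit (assumed)
y = 1 / z
Ybus = [[y, -y], [-y, y]]         # 2x2 bus admittance matrix
V1 = 1.0 + 0j                     # slack bus voltage, per unit
S2 = -(0.8 + 0.4j)                # load at bus 2 as a (negative) injection, p.u.

V2 = 1.0 + 0j                     # flat start
for _ in range(100):
    # KCL at bus 2: Ybus[1][0]*V1 + Ybus[1][1]*V2 = I2 = conj(S2)/conj(V2)
    V2_next = ((S2 / V2).conjugate() - Ybus[1][0] * V1) / Ybus[1][1]
    if abs(V2_next - V2) < 1e-12:
        V2 = V2_next
        break
    V2 = V2_next

print(f"|V2| = {abs(V2):.4f} p.u.")   # voltage magnitude at the load bus
```

Production power flow codes use Newton-Raphson with sparse Jacobians rather than Gauss-Seidel, but the fixed-point update above enforces the same Kirchhoff current law at each load bus.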

In this paper, the present status of power systems is discussed with a special focus on the sensitivities of a 2012 point of view. Trends are discussed, and this paper is intended to be a companion to a second paper on the future of electric power engineering.

SECTION II

ELECTRIC ENERGY RESOURCES AND SUSTAINABILITY

A. Major Energy Resources

Unlike the transportation sector, where 94% of the primary energy source is oil (petroleum), the electric power sector draws on a much broader mix of energy resources and generation technologies. Due to this diversity, the electric power sector is not as susceptible to price spikes and political pressure as the transportation industry. The main primary energy resources in the electric power sector are coal, natural gas, nuclear, hydropower, and, to a smaller extent, other renewable resources dominated presently by wind energy. The energy mix for the roughly 4000 billion kWh of U.S. electric energy generation in 2010 was about 45% coal, 24% natural gas, 20% nuclear, 6% hydro, and 1% petroleum, with the remainder mostly other renewables [10].

Coal: Coal is currently the least expensive generation option. Though less efficient than other technologies, it has the lowest cost per kilowatthour produced. The electric power sector is by far the dominant consumer of coal (93%). Coal supplies are also plentiful, and hence the price volatility is much lower compared to, for example, natural gas. The United States, which has the largest reserves of coal, is estimated to have enough coal to last for more than 200 years at the current consumption rate [11]. The major problem associated with coal-based generation is the high level of emissions of carbon dioxide and other gases, including sulfur and nitrogen oxides. Greenhouse gases (GHGs), dominated by CO2, have been linked to global warming and other adverse environmental impacts. With imminent policies that try to limit GHG emissions, the share of coal in the energy mix for electricity is expected to trend progressively lower.

Natural gas: Natural gas, which is mostly methane, contributed about 25% of the total energy used in the United States in 2010 across all use sectors. The electric power sector is one of the major consumers of natural gas in the United States, accounting for about 31% of the total use. Natural-gas-based power generation is more efficient and cleaner than coal-based generation; the CO2 emitted per unit of heat produced for natural gas is roughly one half of that for coal. With growing environmental concerns, the use of natural gas for power generation has increased dramatically in the last 20 years, as illustrated in Fig. 2, which shows the net generation capacity changes for the various resource types (1989–2009) [12]. Natural-gas-based generation also dominates the proposed capacity additions, and the U.S. Energy Information Administration [13] projects that it will account for 62% (or 130 GW) of new capacity additions by 2035.

Fig. 2. Net generation capacity changes by source between 1989 and 2009 in the United States [12].

Nuclear: The first U.S. nuclear plant was constructed in 1957, and in the 1970s and 1980s, nuclear power experienced rapid growth. Since 1990, the share of nuclear power in electric generation has remained fairly constant at around 20%. For technical and economic reasons, nuclear plants are operated at high capacity factors, exceeding 90%, and these plants primarily serve the base load. Nuclear power does not contribute to CO2 emission. However, the radioactive waste generated and the possibility of catastrophic accidents are major environmental and political concerns. Incentives provided by the Energy Policy Act of 2005 and growing global warming awareness led to renewed interest in constructing new nuclear plants. The Fukushima Daiichi incident on March 11, 2011 [14], the result of a near-record magnitude-9.0 earthquake and a 14-m tidal wave, may slow nuclear energy development.

Renewable resources: Renewable resources contributed about 10% of total electric generation in 2010, much of it from hydroelectric power (more than 60%), followed by wind at around 20%. The U.S. Energy Information Administration (EIA) predicts that renewable resources will be the fastest growing generation resource, reaching 17% of total energy in the United States by 2035. Wind energy in particular has seen explosive growth in the past few years and continues to grow at a rapid pace. In 2009 alone, 10 GW of new wind power capacity was added in the United States, representing 39% of all new capacity added that year [15]. Fig. 3, which superimposes actual installed wind generation on the deployment path laid out in [16] to realize the vision of 20% wind by 2030, shows dramatically that the actual growth in the last four years and the projected growth in 2010–2012 far exceed the deployment plan. With renewable energy tax credits, and in select high-wind areas, wind energy can be cost competitive with conventional resources. There are already areas in the United States and worldwide where wind penetration is substantial; for example, Iowa derives 19.7% of its total generation from wind resources [17].

Fig. 3. Actual wind installation versus deployment path required for realizing 20% wind by 2030 [15].

B. Sustainability, Renewable Energy, and Flexible Demand

Renewable energy resources are key to sustainability. Many states have adopted renewable portfolio standards (RPSs), some of which are very aggressive. California, for example, requires that 33% of electricity production come from renewable resources by 2020, with more aggressive goals expected for future years. The major downside of renewable energy resources such as wind and solar is that they are variable. The uncontrollable nature of these resources places a significant strain on grid operations. The current procedure to manage variable resources is to increase reserve requirements. While this approach may work for contemporary lower levels of intermittent resources, it will not be, by itself, sufficient in future years with much higher levels of variable resources. Even with today's modest levels, there is spillage: although the resource is capable of producing energy, the operator cannot accept it due to grid limitations or reliability concerns.

Either there will have to be more investments in traditional generation in order to balance out this variability, which will further increase the costs to achieve high renewable energy penetration levels, or society will have to undergo a transformational shift as to how we consume electricity. The role of flexible demand is pivotal to our ability to reach aggressive RPS goals.

Demand for electric energy is traditionally assumed to be very close to perfectly inelastic, and many believe that this is unlikely to change. Contemporary patterns of electric energy consumption are a byproduct of fixed retail rates as well as the low cost of energy. These consumption patterns are not sustainable, and there will need to be changes or breakthroughs in technology that substantially reduce the cost of renewable energy resources and provide a cost-effective way to manage these intermittent resources. Interestingly, generation operational flexibility appears to be improved by the inclusion of renewables in the plant portfolio mix.

Real-time pricing has the potential to incentivize consumers to change their consumption habits. Consumers are unlikely to track real-time prices continually, but smart devices will allow consumers to express their consumption preferences. Smart devices along with real-time pricing could enable a transformational change in electricity consumption without making it an overly burdensome process. A smart device could be programmed to consume electricity when the price drops to a specified level, to consume electricity when a large level of renewable energy production is forecasted, or even to communicate with a third party that centrally manages a large aggregate of consumption. In this last situation, the consumer could specify certain restrictions, such as a time by which the task must be completed.
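A minimal sketch of such a device policy, with hypothetical prices, wind forecasts, and trigger thresholds (none taken from an actual tariff or market):

```python
# Price- and renewable-triggered consumption policy for a smart device (sketch).
def should_run(price, price_threshold, wind_forecast, wind_trigger):
    """Run when the real-time price is low enough OR a wind surplus is forecast."""
    return price <= price_threshold or wind_forecast >= wind_trigger

# Hourly real-time prices ($/MWh) and forecast wind output (MW), illustrative
prices = [42, 35, 28, 31, 55]
wind   = [200, 450, 800, 700, 300]
schedule = [should_run(p, 30, w, 750) for p, w in zip(prices, wind)]
# the device consumes in hours where price <= $30/MWh or wind forecast >= 750 MW
```

A deferrable task (e.g., EV charging with a morning deadline) would add a constraint forcing the device on in the final hours if the triggers never fire.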

SECTION III

EVOLUTION OF THE TRANSMISSION/DISTRIBUTION GRID

The trend in ac transmission voltages worldwide is shown in Fig. 4. The earliest applications in the 1880s brought doubts as to whether long-distance transmission could be used to synchronize large synchronous generators, but in Europe and North America the fear proved largely unfounded, with overhead transmission voltages reaching 200 kV and beyond by 1929. Direct current (dc) transmission (both submarine and overhead) reached similar levels by the 1980s. There is a potential tradeoff between the advantages of bulk energy transmission using 1000-kV class technologies and the impact on a synchronously operating system of the loss of a very high capacity transmission circuit (e.g., exceeding 1 GW). Also, the cost-benefit analysis of ultrahigh-voltage technologies and examination of Fig. 4 seem to imply that a practical engineering limit near ~1.2 MV may be reached. Currently, North America is served by four large synchronous ac grids with relatively low capacity dc (asynchronous) ties between them: the Western Interconnection, the Eastern Interconnection, the Quebec Interconnection, and the Texas Interconnection. In Europe, the majority of the continent is connected synchronously at 50 Hz.

Fig. 4. Trend of transmission voltages worldwide. Voltages greater than 1000 kV (ac) are termed ultrahigh ac voltages, and greater than ±800-kV dc are ultrahigh dc voltages.

In the 2010s, there has been renewed discussion of the optimal size of an ac interconnection. Because dynamic performance is an issue in ac systems, some believe that there may be value in disconnecting large interconnected systems, at least during potential emergency conditions [18]. The implementation of unrestricted power marketing and the high cost of back-to-back ac/dc/ac interties between asynchronous networks have motivated the continuation of the present practice of operating large-scale synchronous networks.

High-voltage dc (HVDC) transmission deserves special attention: the technology was promoted in the 1930s in Europe by ASEA. The advantage of dc is best seen in long-distance overhead circuits and in submarine and underground circuits (where cable capacitance can be an issue at high transmission voltages). The technology was proven in the 1950s in Russia and has progressed to very large long-distance projects; in 2012, an HVDC overhead circuit in Brazil is expected to be in the 7-GW range and extend over 2500 km. Innovative designs in superconducting cables have also been implemented. From a power systems engineering point of view, the main challenges have been in loop flow management; mitigation of intertie oscillations; avoidance of blackouts; and design of grid interconnections.

Considerable attention has been given to transmission engineering technologies in the first century of electrification, but in more recent years there has been a growing recognition of the level of investment in the power distribution system. Generally, the transmission network energizes several subtransmission networks in the 69–138-kV class. These subtransmission systems energize many substations at which voltage is converted down to distribution levels. Distribution voltages evolved from a few kilovolts to about 15 kV by the 1970s, and many of these 15-kV circuits are now being considered for upgrade to 37.5 kV. Heydt [19] cites some contemporary areas of interest in power distribution engineering:

  • utilization of electronic controls;
  • integration of distributed generation resources, especially solar and wind resources;
  • move away from traditional radial distribution architecture to networked primary systems, and also networked secondary systems;
  • integration of new loads, particularly electric vehicle loads;
  • development of new pricing infrastructures to promote sustainable technologies and resource integration;
  • use of new materials including insulation and dielectrics;
  • continuing and expanding efforts in demand side management;
  • improvement and management of reliability.

SECTION IV

POWER SYSTEM OPERATIONS AND CONTROL

Innovations in automation associated with power system operations and control have resulted in the most significant advancements and enhancements to the electric grid in the past 30 years. With the advent of commercial mainframe digital computers, the era of major advances in energy management system (EMS) development and automatic control of the bulk power system was initiated.

The guiding principles behind power system operation and control include:

  • balance power generation and demand continuously;
  • balance reactive power supply and demand to maintain scheduled voltages;
  • monitor flows over transmission lines and other facilities to ensure that thermal limits are not exceeded;
  • maintain system stability;
  • operate the system reliably even if a contingency occurs, such as the loss of a key generator or transmission facility (the N-1 criterion);
  • prepare for emergencies.

In order to adhere to these guiding principles, the modern EMS incorporates the following components: information gathering and processing; decision and control; and system integration.

Fig. 5 depicts the various elements of a modern EMS.

Fig. 5. Elements of a modern EMS.

The elements of a modern EMS are further detailed in Fig. 6, where the time scales for the various EMS functions are depicted; the elements are grouped by grayscale color and associated background pattern. The individual elements in Fig. 6 will be discussed in some detail. These elements are primarily highly specialized analytical tools for which significant software and numerical analysis techniques have been developed, and the tools have evolved and improved in sophistication and efficiency over the years. Another major factor that has led to significant new developments in EMS tools is the advent of electricity markets in several countries around the world; several new analytical tools have been developed to aid power system operation in the market environment.

Fig. 6. Time scales associated with various EMS functions.

The guiding principles of power system operation identified earlier can be categorized into two broad categories:

  • real-time generator dispatching operations;
  • real-time transmission operations.

The dispatching operations involve many activities that are conducted in parallel on a continuous basis, 24 hours a day, and can be grouped into three overlapping time frames: rescheduling generation resources; scheduling generation resources; and dispatching generation resources. With the advent of market-based operation, many operating entities also serve as the market operator. Thus, the operating entity has the responsibility of conducting auctions to ensure that there are sufficient generation resources for the next day. This is typically done using a day-ahead market, in which auctions are conducted based on an hourly forecast of the next day's load. For each hour of the following day, the market operator obtains buy bids and sell bids and matches them to obtain a clearing price. One approach is a double-sided auction, in which the market operator lines up the lowest sell bid with the highest buy bid and then clears the market when the forecasted load for the hour is met, obtaining the clearing price as depicted in Fig. 7. Once the day-ahead market is cleared, any shortfall in resources is covered by purchases in a spot market. In addition, market tools are also needed for purchasing and delivering ancillary services, which include spinning reserve and reactive resources. Significant new developments in EMS software tools related to market operation have taken place in the last five to eight years, and several EMS vendors have market operations tools that are currently in use. Over the years, some of the key developments in EMS applications include the following.
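The hourly clearing step can be sketched as a merit-order match of sell offers against the forecasted load. The offers and load below are illustrative, and a full double-sided auction would also stack price-sensitive buy bids in descending order:

```python
# Simplified single-hour market clearing in merit order (illustrative data).
def clear_market(offers, load):
    """Accept sell offers cheapest-first until the forecast load is served."""
    accepted, served, price = [], 0.0, None
    for p, q in sorted(offers):          # sort by price: cheapest offers first
        if served >= load:
            break
        take = min(q, load - served)     # accept only what is still needed
        accepted.append((p, take))
        served += take
        price = p                        # marginal accepted offer sets the price
    return price, accepted

offers = [(20, 300), (35, 200), (50, 400), (80, 150)]   # ($/MWh, MW)
price, accepted = clear_market(offers, load=600)
# 300 MW @ $20 + 200 MW @ $35 + 100 MW @ $50 -> clearing price $50/MWh
```

All accepted offers are paid the single clearing price, which is what makes the marginal unit's offer, not each unit's own bid, the price signal.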

Fig. 7. Double-sided auction.

Supervisory Control and Data Acquisition (SCADA): This application serves as a vehicle for control of system devices and runs every 2–4 s. It also provides communications dealing with equipment indications and status, as well as data regarding load, generation, and voltages. SCADA originated as a special-purpose real-time monolithic processor, developed into a distributed computing structure, and then into the present standardized computer network structure. A SCADA system consists of a master station that communicates with remote terminal units (RTUs) to observe and control physical components in the system. The basic elements of SCADA are sensors that measure the desired quantities. These include current transformers (CTs), potential transformers (PTs), and a whole new breed of intelligent electronic devices (IEDs) capable of measuring a range of quantities associated with component performance. These data are fed to the RTU. The master computer or unit resides at the control center EMS and scans the RTUs for reports. The data are then utilized for a range of analysis functions and for situational awareness. With the advent of global positioning system (GPS) time-synchronized phasor measurement units (PMUs), a new breed of measurements that can be compared across the system provides a significant enhancement of situational awareness. Fig. 8 illustrates details of the SCADA system. Not shown in Fig. 8 is the new concept of coordinated controls among renewable resources such as wind farms.

Fig. 8. SCADA details.

The automatic generation control (AGC) function involves two primary components: 1) economic dispatch (ED) of the committed units; and 2) load frequency control.

The ED function guarantees that during any dispatch period the load is met by the most cost-effective mix of generation among the committed units. Tools for performing the ED function have evolved and become increasingly sophisticated: from a simple approach that utilized the concept of equal incremental cost, to Lagrange-multiplier-based constrained optimization with network constraints, to a fully implemented ac optimal power flow with options to perform contingency analysis. The ED function requires the units to be committed, and this requires the solution of a unit commitment (UC) problem, which is a mixed integer optimization. In the early days of EMS development, dynamic programming was utilized; however, the size of the problem that could be implemented was limited by the curse of dimensionality. A significant enhancement to overcome this problem was the development of a Lagrangian-relaxation-based approach. The most recent enhancement to the solution of the UC problem is the use of mixed integer optimization including security constraints, and accounting for the stochastic nature of wind and other renewable resources.
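The equal incremental cost approach mentioned above admits a closed-form sketch for quadratic cost curves; the unit coefficients and demand below are illustrative, and generator limits are ignored:

```python
# Equal incremental cost dispatch among committed units (illustrative data).
# Unit cost model: C_i(P) = a_i + b_i*P + c_i*P^2, so dC_i/dP = b_i + 2*c_i*P.
units = [            # (b_i in $/MWh, c_i in $/MW^2h); a_i does not affect dispatch
    (8.0, 0.004),
    (6.5, 0.006),
    (7.2, 0.009),
]
demand = 900.0       # MW to be served this dispatch period

# At the optimum (limits ignored) every unit runs at the same incremental
# cost lam, with P_i = (lam - b_i) / (2*c_i) and sum(P_i) = demand.
num = demand + sum(b / (2 * c) for b, c in units)
den = sum(1 / (2 * c) for b, c in units)
lam = num / den                                   # system incremental cost, $/MWh
dispatch = [(lam - b) / (2 * c) for b, c in units]
```

With unit limits, the same condition holds only for units strictly inside their limits, which is why production tools moved to constrained optimization.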

Turbine-governor controls ensure that the generation in the system balances the load. This function has two principal loops: 1) the primary control loop; and 2) the secondary control loop. The primary control loop includes the speed governors that sense generator shaft speed and adjust turbine input based on the error signal. The secondary loop is more complex and is based on the concept developed by Nathan Cohn: the area control error (ACE) is calculated to develop control signals that raise or lower the outputs of generators in a control area that are participating in system governing.
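The ACE combines the area's tie-line flow deviation with a frequency-bias term; the sketch below uses the conventional tie-line bias form with illustrative values (the bias B is in MW/0.1 Hz and is negative by convention):

```python
# Area control error (ACE) for the secondary (load frequency) control loop.
# Tie-line bias form: ACE = (NI_actual - NI_sched) - 10*B*(F_actual - F_sched)
def area_control_error(tie_actual, tie_sched, freq_actual, freq_sched, bias):
    """Negative ACE -> the area is under-generating; raise generation."""
    return (tie_actual - tie_sched) - 10 * bias * (freq_actual - freq_sched)

# Illustrative area: exporting 50 MW less than scheduled while frequency sags
ace = area_control_error(tie_actual=450.0, tie_sched=500.0,
                         freq_actual=59.98, freq_sched=60.0, bias=-100.0)
# (450 - 500) - 10*(-100)*(-0.02) = -50 - 20 = -70 MW -> raise generation
```

The AGC then allocates the correction across the participating units according to their participation factors.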

The state estimator (SE) is another critical element of the EMS. A redundant set of measurements is used to determine the state of the system (voltage magnitude and phase angle at each node) given the system topology and the network parameters. Significant new developments in state estimation utilize synchronized PMU-based measurements.
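The core computation is a weighted least squares fit of the redundant measurements to the network model. The sketch below uses a linear (DC) approximation on a hypothetical three-bus system, with susceptances, measurement set, and noise values chosen purely for illustration:

```python
import numpy as np

# Weighted least squares (WLS) state estimation on a linear (DC) model.
# State x = [theta2, theta3]: phase angles at the non-slack buses of a
# 3-bus system (bus 1 is the slack, theta1 = 0). Data are illustrative.
b12, b13, b23 = 10.0, 5.0, 8.0        # line susceptances, per unit (assumed)

# Measurement model z = H x + noise for flows P12, P13, P23 and injection P2.
H = np.array([
    [-b12,        0.0],               # P12 = b12*(theta1 - theta2)
    [ 0.0,       -b13],               # P13 = b13*(theta1 - theta3)
    [ b23,       -b23],               # P23 = b23*(theta2 - theta3)
    [ b12 + b23, -b23],               # P2  = b12*theta2 + b23*(theta2 - theta3)
])
x_true = np.array([-0.05, -0.10])     # "true" state used to fabricate data
noise = np.array([0.002, -0.001, 0.001, -0.002])
z = H @ x_true + noise                # redundant, noisy measurement set
W = np.diag([1 / 0.01**2] * 4)        # weights = 1/sigma^2, equal variances

# WLS estimate: x_hat minimizes (z - Hx)' W (z - Hx)
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
```

The production (ac) problem is nonlinear in voltage magnitudes and angles and is solved by iterating this normal-equation step with a measurement Jacobian.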

The security analysis function utilizes the SE output to examine if the system is susceptible to contingencies. Static security analysis was first developed. This consisted of first screening a large number of contingencies and then using a power flow to analyze critical contingencies identified by the screening to determine if the postdisturbance operating condition was acceptable. A significant new enhancement to security analysis is the ability to perform online dynamic security assessment to examine both rotor angle and voltage stability.

With the increased penetration of renewable resources and greater reliance on market-based operation, significant new enhancements are taking place in EMS systems. These factors result in greater uncertainty, which needs to be characterized and will require new advances in optimization capabilities to incorporate stochastic-optimization-based approaches. The potential of load as a resource and the significant penetration of hybrid electric vehicles also raise new complexities that need to be accounted for in the various EMS applications, with significant bearing on the electricity-market-related functions. Other factors that will significantly advance EMS systems include the use of flexible ac transmission systems (FACTS) and wide-area measurements for control.

The injection of these new complexities, together with reliability requirements set by FERC and NERC, has added additional requirements on the EMS in terms of communications, cybersecurity, the ability to deal with uncertainty, and enhanced capabilities for automated direct digital control of the bulk electric power system.

SECTION V

POWER MARKETS

In the late 19th century, the electric industry was competitive; in Chicago alone there were 24 companies, and single streets had multiple distribution lines, giving customers ample choice [20]. This quickly changed due to economies of scale, which create natural monopolies and were the primary motivation for vertically integrated utilities. As a result, the Public Utility Holding Company Act (PUHCA) of 1935 established regulatory requirements for the electric utility sector. Over the years, however, it was questioned whether such economies of scale still existed, as otherwise generators today would be of much larger size. Furthermore, after seeing other industries deregulate successfully, e.g., the airline industry, there was growing interest in whether the electric energy sector could also deregulate successfully. Likewise, creating markets was seen as a way to better handle the massive amounts of trading that would occur between utilities: neighboring utilities would call each other on the phone and discuss energy trading, a process that is inefficient compared to capturing all of these sales through a central auctioneer. Moreover, the arguments for centralized markets were also based on increasing competition, creating an incentive-compatible structure to drive efficiency and innovation, and giving consumers choice. As a result, the first electric energy market appeared in Chile in the 1980s, followed by the privatization of supply in the United Kingdom in 1990. Today, there is still a mix of energy markets and vertically integrated utilities as the debate over which structure is best carries on.

Electric energy markets are continuing to evolve and grow in complexity. In this section, we review recent changes and proposals for change in three main categories: settlement policies, proposed market redesigns, and flexible market models. In the category of settlement policies, we discuss uplift payments and the settlement issues raised by stochastic optimization. In terms of proposed market changes, the reliability unit commitment problem and demand side bidding are discussed, along with the benefit of developing a more flexible market model. The electric industry has relied on generation as the controllable asset to balance out inflexible, uncontrollable, variable load. With many states adopting RPSs and an international impetus to increase renewable energy penetration, the future generation profile will include substantial levels of intermittent resources. As a result, there is a need to harness flexibility from other resources and assets.

A. Uplift Payments

Uplift payments ensure that a generator at least breaks even over the course of any single day. If a generator's revenue from energy payments is not sufficient to cover its costs, then the independent system operator (ISO) gives it an uplift payment such that its profit for that day is zero. This practice is known as ensuring a nonconfiscatory market.
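The settlement itself is simple arithmetic; the sketch below uses hypothetical dispatch quantities, prices, and cost parameters to show when a make-whole (uplift) payment arises:

```python
# Make-whole ("uplift") settlement for one generator-day (illustrative data).
def uplift_payment(dispatch_mw, lmps, energy_cost, startup_cost):
    """Pay the shortfall, if any, between as-offered costs and energy revenue."""
    revenue = sum(p * lam for p, lam in zip(dispatch_mw, lmps))
    cost = startup_cost + sum(energy_cost(p) for p in dispatch_mw)
    return max(0.0, cost - revenue)

# Unit with a $500 startup cost and $30/MWh incremental cost, run for 4 hours
uplift = uplift_payment(
    dispatch_mw=[100, 100, 100, 100],
    lmps=[28, 29, 31, 32],
    energy_cost=lambda p: 30 * p,
    startup_cost=500.0,
)
# revenue = 100*(28+29+31+32) = $12,000; cost = $500 + $12,000 = $12,500
# -> uplift = $500, even though two hours cleared above the unit's offer
```

Because the nonconvex startup cost is what creates the shortfall here, no uniform hourly price could have made this unit whole, which is why the side payment exists.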

There are a variety of problems with uplift payments. First, consumers must cover the uplift payments; ISOs socialize the uplift payment evenly across all loads. The purpose of using a marginal pricing system is to reflect the actual true cost to deliver another increment of energy to a particular location; uplift payments distort the true price signal. Uplift costs also interfere with load and storage participating in the market. Price-sensitive load could submit a bid, be selected by the market, but end up being charged more than its original bid; there is no guarantee that the locational marginal price (LMP) plus the uplift charge does not surpass the load's bid. It is possible to avoid such situations by implementing a price discrimination policy by charging various consumers different $/MWh uplift charges. However, price discrimination policies are not typically preferred and, furthermore, such a settlement scheme creates perverse incentives for load to not bid their true value of consumption.

Uplifts can also cause problems for storage, since its motivation is to exploit price differences between time periods. Charging uplift payments to storage devices would further decrease their profit margins and decrease the incentive for storage to enter the market. At a time when energy storage would substantially benefit operations, uplift payments are, unfortunately, a deterrent to entry.

Uplift costs are also causing problems for virtual bidders, market participants that are purely financial and try to profit by speculating on the price difference between the day-ahead and real-time prices. Since uplift payments are socialized across all loads, virtual bidders are charged uplift payments when they buy electricity. Virtual bidders always buy (sell) back what they sold (purchased), since they will not actually produce or consume. As a result, they argue that they should not be charged uplift payments because they never actually consume electric energy; their main argument is that uplift costs should be charged based on causation, and that they do not cause uplift costs. This is debated, as at least one ISO has responded by stating its belief that virtual bidders can cause uplift payments just as physical load can.

B. Stochastic Optimization and Pricing

There has been increasing interest in switching from deterministic day-ahead optimization models to stochastic optimization models due to the anticipated levels of intermittent resources. One question that has yet to be clearly answered, however, is what the appropriate settlement scheme should be when using a stochastic model. With a deterministic model, there is only one state and, thus, only one set of LMPs is generated. With a stochastic optimization problem, there are many potentially realizable states. It is unclear whether the LMPs should be based on one of these states or whether an expected price should be established. Using an expected price may increase uplift payments when the expected value is lower than the actual realized cost to deliver an incremental MW to a particular bus. Likewise, the expected LMP could also be much higher than the actual realized cost, thereby increasing the load payment.
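The tension can be seen with a small numerical sketch: scenario probabilities and bus prices below are hypothetical, but they show how an expected price diverges from any single realized price:

```python
# Expected LMP from scenario prices in a stochastic clearing (illustrative data).
scenarios = [          # (probability, scenario LMP in $/MWh at one bus)
    (0.5, 25.0),       # high-wind scenario: cheap energy on the margin
    (0.3, 40.0),
    (0.2, 90.0),       # wind-shortfall scenario: an expensive unit sets the price
]
expected_lmp = sum(prob * lmp for prob, lmp in scenarios)
# 0.5*25 + 0.3*40 + 0.2*90 = 42.5 $/MWh
# If the shortfall scenario realizes, the $90/MWh marginal cost exceeds the
# expected price, and the gap must be recovered through additional uplift;
# if the high-wind scenario realizes, loads pay $42.5 against a $25 cost.
```

Either settlement choice (scenario price or expected price) leaves someone exposed to the gap, which is why the settlement scheme remains an open design question.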

C. Reliability Unit Commitment

ISOs ensure that there is enough capacity committed to meet the ISO's forecasted demand for the following day by running what is known as a reliability unit commitment (RUC), also known as a residual unit commitment, which is solved after the day-ahead market. There have been recent proposals to change day-ahead market models to incorporate the ISO's forecasted demand, effectively combining the RUC and the day-ahead market model into one model and improving market efficiency. ISOs do not do this today because it would make the ISO a market participant, which conflicts with their purpose as an independent entity. Thus, the debate continues as to whether market efficiency can be improved by combining the day-ahead market model with the RUC without compromising the ISO's role as an independent entity.
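A minimal sketch of a residual commitment pass may help; the data and the simple greedy rule below are illustrative and do not represent any ISO's actual RUC algorithm:

```python
# Toy RUC sketch (illustrative, not an ISO's actual method): after the
# day-ahead market clears, commit the cheapest additional offline units
# until committed capacity covers the ISO's demand forecast.

def ruc(committed_mw, forecast_mw, offline_units):
    """offline_units: list of (name, capacity_mw, startup_cost)."""
    extra = []
    for name, cap, cost in sorted(offline_units, key=lambda u: u[2]):
        if committed_mw >= forecast_mw:
            break
        committed_mw += cap      # commit this unit
        extra.append(name)
    return extra

units = [("A", 100, 5000), ("B", 50, 1000), ("C", 80, 3000)]
print(ruc(900, 1000, units))    # commits B, then C, to cover the 100 MW gap
```

A real RUC is a full mixed-integer unit commitment; the point of the sketch is only that it is a second, reliability-driven commitment layered on top of the market solution.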

D. Market Structure for Demand Side Bidding

Current market models are built around generator characteristics. Generators submit monotonically increasing bid offer curves, startup costs, and no-load costs, and relay information to the ISOs regarding their operating limits. Currently, load may be modeled as inflexible, or loads may submit the same information that generators submit. However, there has been little examination of whether such a market model reflects load characteristics.

Redesigning the market model to better accommodate unique load characteristics would further encourage demand side participation. A substantial portion of load is deferrable by nature; tasks may not need to be completed immediately but by a predefined time, e.g., recharging EVs. There is also motivation to allow load to participate in capacity markets. Flexible load may be able to receive capacity payments by offering to reduce its consumption. However, this creates the risk that load establishes an artificially high level of consumption solely to create the opportunity to reduce consumption and receive a capacity payment. PJM has already moved forward by allowing demand response to offer demand reduction as a capacity resource in its forward capacity market, the reliability pricing model (RPM).
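Deferrable-load scheduling is easy to sketch; the hourly prices, energy requirement, and deadline below are hypothetical:

```python
# Sketch of deferrable-load scheduling (all numbers hypothetical): an EV
# needs a given number of charging hours completed by a deadline, and
# the scheduler simply picks the cheapest hours in that window.

def schedule_charging(prices, hours_needed, deadline):
    """Return the indices of the cheapest `hours_needed` hours within
    prices[0:deadline]."""
    window = list(enumerate(prices[:deadline]))
    window.sort(key=lambda h: h[1])          # cheapest first
    return sorted(idx for idx, _ in window[:hours_needed])

prices = [40, 25, 22, 30, 55, 60]            # $/MWh by hour
print(schedule_charging(prices, 2, deadline=4))   # charges in hours 1 and 2
```

A market model that could accept "energy by deadline" offers of this kind, rather than forcing load into generator-style bid curves, is exactly the redesign question raised above.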

E. Modeling of Soft Constraints

ISOs have recently begun to model certain constraints as soft constraints rather than hard constraints. Hard constraints cannot be violated regardless of the cost impact they create in the optimization problem. Soft constraints, on the other hand, can be violated for a set price: the constraint is relaxed when its dual variable, i.e., shadow price, would otherwise exceed that price. The California Independent System Operator (CAISO) refers to these prices in [21] as "uneconomic adjustment parameter values."

ISOs have also begun relaxing the lower bounds on generators' production levels. While many generators have a physical restriction on their minimum output level, generators often bid a different minimum output level, referred to as the "eco-min," because this minimum run level may instead be guided by economic operations. Such modeling approaches have already been adopted by the industry. A more appropriate cost function would allow the unit to operate below its eco-min but at a higher price, since operation below the eco-min can result in less efficient heat rates and/or greater emissions.

Flowgate pricing is one method used by ISOs; it allows a transmission line flow to exceed its steady-state rated capacity for a set price. In the section labeled Integrated Forward Market (IFM) Parameter Values [21], CAISO states, "in the scheduling run, the market optimization enforces transmission constraints up to a point where the cost of enforcement (the shadow price of the constraint) reaches the parameter value, at which point the constraint is relaxed." These prices place a limit on the dual variable (shadow price) of the line capacity constraint, and they have been established somewhat arbitrarily: as of 2011, CAISO used a price of $5000/MWh in both its IFM and its real-time market ($1250/MWh for its residual unit commitment) [21], NYISO used $4000/MWh, and SPP used $2000/MWh.
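A toy dispatch shows the mechanics of such a soft line-flow constraint; the generator costs, demand, line capacity, and penalty price below are hypothetical:

```python
# Toy two-generator dispatch with a soft line limit (all numbers
# hypothetical): a cheap unit sits behind a line of capacity `line_cap`,
# and an expensive unit is local. With linear costs, the shadow price of
# the line constraint is the cost spread c2 - c1; the limit is relaxed
# only when that spread exceeds the penalty (flowgate) price.

def dispatch(demand, line_cap, c1, c2, penalty):
    """Return (g1, g2): output of the cheap remote unit and the
    expensive local unit, treating the line limit as a soft constraint."""
    shadow = c2 - c1              # $/MWh saved per extra MW of line flow
    if shadow > penalty:          # worth paying the violation penalty
        g1 = demand
    else:
        g1 = min(demand, line_cap)
    return g1, demand - g1

print(dispatch(120, 100, 20.0, 80.0, 5000.0))    # limit binds: (100, 20)
print(dispatch(120, 100, 20.0, 9000.0, 5000.0))  # scarcity pricing: limit relaxed
```

In a real market model the same effect is achieved by adding a slack variable to the flow constraint with the penalty price as its cost coefficient, which caps the constraint's shadow price exactly as the CAISO passage describes.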

F. Transmission Switching

Transmission switching is predicted to play a larger role in markets. It has been proposed as a corrective switching action for many years [22], [23] and has been considered for cost reduction purposes [24]. When the Sacramento Valley recently experienced heavy congestion, CAISO first relaxed the thermal constraint on one of its main 115-kV lines, as described in Section V-E, and later identified a transmission circuit switching action that relieved the congestion [25]. One potential downside of transmission switching is that it undermines an assumption that ISOs rely on to maintain revenue adequacy in the Financial Transmission Rights (FTRs), also known as Congestion Revenue Rights (CRRs), markets: that the network topology does not change. Thus, while transmission switching provides multiple avenues to improve system operations, it may cause revenue inadequacy in FTR markets.

SECTION VI

THE ROLE OF POWER ELECTRONICS IN POWER SYSTEMS

Another major advance in electric power systems has been the use of power electronic devices and converters for control and protection. Since the use of thyristors in HVDC systems in the 1970s, the role of power electronics has grown significantly to encompass all aspects of power systems including generation (distributed), transmission, distribution, and end use. Power electronics is also a key enabling technology for the emerging smart grid. Coupled with advances in power semiconductor devices and other core power electronic technologies, and with availability of extensive sensor and communication networks, the role of power electronics is expected to grow significantly in the future.

A. HVDC and HVDC Light

Thyristor-based HVDC transmission systems offer several advantages over ac systems for long-distance and underground/submarine transmission, including improved system stability, lower losses, and elimination of capacitive currents. HVDC systems are required for interconnecting incompatible ac networks, such as those that operate at different frequencies or are unsynchronized. Typical HVDC systems have two converter stations (rectifier or inverter, depending on the power flow direction) connected by a long transmission line, with dc voltage levels reaching as high as ±800 kV. A 12-pulse converter, composed of two line-commutated, three-phase thyristor bridge converters in series, one supplied by the wye secondary and the other by the delta secondary, offers a good compromise between complexity and harmonic reduction, and hence is used widely [26]. HVDC systems are gaining renewed interest as part of extra-high-voltage green power superhighways proposed to transmit large-scale wind and solar energy. A recent innovation in the HVDC field is the insulated gate bipolar transistor (IGBT)-based HVDC Light technology. HVDC Light consists of pulse width modulated (PWM) voltage source converters (VSCs) switching at a few kilohertz and offers superior controllability of active and reactive power flow, significantly lower filter requirements, and flexibility such as multiterminal HVDC systems and dc grids. HVDC Light has been proposed for integrating offshore wind energy and for underground distribution such as city center in-feed [27].
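The ideal average dc voltage of the 12-pulse converter described above follows from the textbook six-pulse bridge relation; the sketch below neglects commutation overlap, a standard idealization:

```python
# Ideal average dc voltage of a 12-pulse converter: two series six-pulse
# bridges, each contributing (3*sqrt(2)/pi) * V_LL * cos(alpha),
# neglecting commutation overlap (a textbook idealization).
import math

def twelve_pulse_vdc(v_ll, alpha_deg):
    """v_ll: line-to-line rms voltage at each bridge; alpha_deg: firing angle."""
    per_bridge = (3 * math.sqrt(2) / math.pi) * v_ll * math.cos(math.radians(alpha_deg))
    return 2 * per_bridge

print(round(twelve_pulse_vdc(1.0, 0.0), 3))    # about 2.701 per unit
print(round(twelve_pulse_vdc(1.0, 15.0), 3))   # reduced by cos(15 deg)
```

Varying the firing angle alpha is how the line-commutated converter controls the dc voltage, and hence the transmitted power, in a conventional HVDC link.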

B. Flexible AC Transmission Systems

FACTS is a collective term for different types of power electronic devices/converters-based systems that are capable of controlling the power flow in high-voltage ac transmission systems [28], [29]. FACTS devices are capable of the following major functions:

  • precisely controlling power flow along desired transmission corridors and reducing loop flows;
  • improving transient, small-signal, and voltage stability;
  • increasing the capacity of existing transmission infrastructure.

FACTS devices control parameters of the transmission system for power flow control. The active power flow between two buses for a lossless lumped parameter line is given by
$$P={V_{1}V_{2}\over X}\sin(\delta)\eqno{(1)}$$
where $V_{1}$ and $V_{2}$ are the magnitudes of the voltages at the two buses, respectively, $\delta$ is the phase angle between the two bus voltages, and $X$ is the series line reactance. FACTS devices control one or more of these three parameters to control power flow and improve stability.
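The power transfer relation (1) can be coded directly; the per-unit values below are illustrative:

```python
# The lossless-line power transfer relation (1): P = V1*V2/X * sin(delta).
# FACTS devices act by modulating V1, V2, X, or delta.
import math

def line_power(v1, v2, x, delta_deg):
    """Active power over a lossless line, per-unit quantities."""
    return v1 * v2 / x * math.sin(math.radians(delta_deg))

print(line_power(1.0, 1.0, 0.5, 30.0))    # about 1.0 per unit

# Halving the effective series reactance (as a TCSC might) doubles the
# transfer at the same angle:
print(line_power(1.0, 1.0, 0.25, 30.0))   # about 2.0 per unit
```

The second call illustrates the series-compensation lever; a shunt device such as an SVC or STATCOM instead supports the bus voltage magnitudes, and a phase-shifting device acts on delta.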

The earlier FACTS devices were predominantly thyristor based, like the thyristor controlled series capacitor (TCSC) and static VAR compensator (SVC). TCSC is a series connected FACTS device that controls the effective impedance of the transmission line by suitably varying the firing angle of the two back-to-back thyristors. TCSC has been used to mitigate subsynchronous resonance in series capacitor compensated transmission [30].

With advances in power semiconductor devices and circuit topologies, the use of PWM-converter-based FACTS devices has seen a significant increase. These newer FACTS devices are mostly based on VSC implemented using fully controllable devices capable of switching at a few kilohertz [28], [29]. Within their voltage and current ratings, the VSC-based FACTS devices are capable of injecting any suitable, controlled voltages or controlled currents at the line frequency. These FACTS devices offer improved speed of response and extended control range independent of the line operating conditions. The static synchronous series compensator (SSSC) is a PWM converter that can inject controlled voltages in series with a line and in phase quadrature with the line current to control the phase angle or the line impedance. The static compensator (STATCOM) is a shunt connected PWM converter that can inject controlled leading or lagging reactive current to control the bus voltage magnitudes. The versatile unified power flow controller (UPFC) is topologically a combination of SSSC and STATCOM. As shown in Fig. 9, it has a series PWM converter and a shunt PWM converter sharing a common dc link.

Figure 9
Fig. 9. Block diagram of a UPFC.

Unlike SSSC, the UPFC can inject series voltages at any arbitrary phase relationship with the line current with the required active power processed by the shunt converter. With this capability the UPFC can effectively control the power flow over a wide range at different operating conditions.

FACTS devices have the ability to enhance both the transient and small-signal stability of power system networks. Transient stability is affected by the electrical power output during a fault and immediately after fault clearance. Since power flow can be controlled continuously using FACTS devices by dynamically changing the $P{-}\delta$ characteristics of the system, power during and after a fault can be controlled to improve the stability margin of the system.

C. Distribution System Applications

A main application of power electronics in distribution systems is power quality. Digital-age loads, for example, semiconductor processing plants and data centers, are highly sensitive to even momentary power quality problems such as voltage sags, harmonics, phase unbalance, and flicker in the supply voltage. Short-duration voltage sags are the predominant power quality events, with estimated revenue lost per event of more than $0.5M for semiconductor plants [31]. The dynamic voltage restorer (DVR) is a series device that injects a controlled voltage to compensate for voltage sags and other disturbances to protect sensitive equipment. The distribution static compensator (DSTATCOM) is a shunt device that injects controlled currents to compensate for power quality problems in the load current, such that the compensated load draws balanced, sinusoidal, unity-power-factor current. The unified power quality controller (UPQC), which has the same structure as the UPFC shown in Fig. 9, combines the features of a DVR and a DSTATCOM. It can inject current in shunt and voltage in series simultaneously, thus correcting for grid voltage sags as well as distortion in the load currents [32]. All of these devices consist of voltage source converters implemented using IGBT switches operating at frequencies in the range of tens of kilohertz. They have fairly high control bandwidth and can respond to voltage disturbances within a small fraction of a line-frequency cycle.
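The DVR action described above reduces to phasor arithmetic; the sag depth and phase jump below are hypothetical:

```python
# Phasor sketch of DVR series compensation (sag numbers hypothetical):
# the series injection is the difference between the nominal voltage
# phasor and the sagged supply voltage phasor.
import cmath, math

nominal = cmath.rect(1.0, 0.0)                    # 1.0 pu at 0 degrees
sagged = cmath.rect(0.7, math.radians(-10.0))     # 30% sag with a phase jump

injection = nominal - sagged        # what the DVR must add in series
load_voltage = sagged + injection   # restored to nominal at the load

print(round(abs(injection), 3))     # required injection magnitude, pu
print(round(abs(load_voltage), 3))  # 1.0 pu restored
```

A real DVR performs this computation continuously on measured waveforms and must also budget the active power its dc link can supply during deep sags; the sketch shows only the steady-state phasor relationship.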

D. Grid Integration of Wind and Solar Energy

Perhaps the fastest growing application of power electronics in power systems is the grid integration of wind energy and solar photovoltaics (PV). It is projected that the capacity of wind generators in the United States could reach 300 GW, a capacity penetration of 20%, by 2030 [16]. Large wind farms consist of many multimegawatt wind turbines interconnected with the utility grid through a medium-voltage collector network. Early designs employed fixed-speed, squirrel-cage-type induction generators. However, doubly fed induction generators (DFIGs) with a wound rotor and dual PWM converters are currently the dominant wind generator technology [33]. The main advantage of DFIG-based generation is that it allows higher energy extraction from wind at varying wind speeds, with the power converter rated for only about 25% of the total power. The stator winding is connected directly to the utility grid while the rotor is supplied with controlled, variable-frequency currents by the PWM converter, as shown in Fig. 10. By controlling the rotor currents, the machine speed can be controlled from subsynchronous to supersynchronous speeds as required for maximum power tracking. Another advantage of the DFIG is that the system can generate or absorb reactive power, eliminating the need for capacitor banks to provide reactive power support. Wind generators based on permanent magnet or conventional synchronous machines with full power-processing converters are also gaining interest and market share due to their enhanced controllability, better voltage ride-through capability, and ability to implement various grid support features under a wide range of operating conditions.
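The roughly 25% converter rating follows from a back-of-envelope slip argument; the sketch below neglects losses and assumes an operating slip range of ±25%:

```python
# Why the DFIG converter is rated at ~25% of total power: neglecting
# losses, rotor power is approximately |slip| times stator power, so the
# converter need only handle the maximum slip fraction of the operating
# range (assumed here to be +/-25%, a typical design choice).

def rotor_power_fraction(slip):
    """Approximate |P_rotor| / P_stator for a lossless DFIG."""
    return abs(slip)

slip_range = [-0.25, 0.0, 0.25]    # supersynchronous to subsynchronous
converter_rating = max(rotor_power_fraction(s) for s in slip_range)
print(converter_rating)            # 0.25 of stator power
```

This partial-power rating is the economic advantage of the DFIG over full-converter machines, which must process 100% of the generated power through their converters.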

Figure 10
Fig. 10. DFIG-based wind generator.

Though significantly smaller than wind in terms of existing capacity, solar-PV-based generation, both distributed and utility scale, is undergoing dramatic growth, buoyed by policy initiatives and research advances in bringing down the cost of solar generators. PV systems range from rooftop applications of a few kilowatts to utility-scale solar power plants rated as high as 100 MW and growing. The basic functions of the power electronic inverter employed in all PV systems include converting dc power from the PV panels to grid-quality ac power, usually injected as controlled current at unity power factor, maximum power tracking, galvanic isolation, grid synchronization, and compliance with relevant grid interconnection standards. The grid-side characteristics of wind and solar power plants are determined almost entirely by the control design of the interfacing power electronic converters. Newer power converters provide limited grid support features such as low-voltage ride-through and voltage regulation capabilities.
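Maximum power tracking, one of the inverter functions listed above, is often implemented with a perturb-and-observe loop; the PV curve below is a toy parabola (a real panel's curve differs) peaking at an assumed 30 V:

```python
# Minimal perturb-and-observe maximum power point tracking sketch.
# The PV power curve is a toy model peaking at 30 V, not real panel data.

def pv_power(v):
    return max(0.0, 100.0 - 0.1 * (v - 30.0) ** 2)   # watts, toy model

def perturb_and_observe(v=20.0, dv=0.5, steps=100):
    direction = 1.0
    for _ in range(steps):
        v_new = v + direction * dv
        if pv_power(v_new) < pv_power(v):
            direction = -direction       # overshot the peak: reverse
        v = v + direction * dv
    return v

v = perturb_and_observe()
print(round(v, 1))    # settles into a small oscillation near 30 V
```

The steady-state oscillation of one step size around the peak is a known characteristic of perturb-and-observe; production trackers shrink the step or switch algorithms near the maximum.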

E. Power Electronics in Emerging Smart Grid

The emphasis on smart grid can dramatically increase the role of power electronics. The vision of smart grid includes integrating large-scale renewable energy resources and storage, resilience to attacks and self-healing from disturbances, new levels of power quality, enabling user interaction, and plug and play of smart energy devices. In order to achieve high penetration of wind and solar resources, the renewable generators need to be capable of providing dynamic grid support on par with that of conventional generators. Power converters interfacing these resources will provide primary frequency support through supplemental control loops [34] and controlled ramp rates utilizing transient scale storage. They will be capable of providing reactive power support and voltage regulation. In addition to remaining connected during grid faults, the power converters will also be able to support programmable fault response in support of grid fault recovery. High penetration of renewable resources requires significant new transmission and delivery infrastructure, and also precise control of power flow in these lines to maximize the use of renewable energy. Widely distributed FACTS devices at relatively lower power levels (perhaps a device per transmission line [35]) and dynamically reconfigurable designs will enable such precise power flow control.

The distribution system of the smart grid is often portrayed as an energy internet, providing similar levels of user interaction as the information internet, plug and play of smart devices, and bidirectional energy exchange. Solid state transformers (SST) that replace conventional 60-Hz distribution transformers with multistage power converters and a high-frequency transformer will support optimizing energy flow with multiple objectives, enhance performance of distribution systems, enable effective integration of distributed microresources, and provide high levels of power quality and reliability. Other power-electronics-based plug and play devices under development include distributed energy storage devices, universal energy management systems with advanced demand response features, microgrid ready renewable energy interfaces, and intelligent fault management systems. Power electronic controllers will also support emerging distribution system architectures such as microgrids [36], neighborhood power systems, and dc distribution systems. These new systems can operate in grid islanded mode, with the power converters maintaining the frequency and voltage integrity. Solid state breakers, solid state transfer switches, and solid state current limiters based on high voltage, wide band gap devices will be utilized for fast protection and network reconfiguration required for smart distribution systems.

Electrification of transportation is another area of growing application for power electronics. Coupled with real-time communication with grid, smart charging of plug-in vehicles through high performance power converters will help ensure demand balancing and power quality, and enable high penetration of electric vehicles without adverse effect on the grid. In addition, power electronic control will also enable advanced grid support features such as vehicle-to-home (V2H) and vehicle-to-grid (V2G) power flow, and use of electric vehicles as distributed storage in support of renewable resources.

SECTION VII

POWER ENGINEERING EDUCATION

In the early years of the previous century, electrical engineering education was dominated by power engineering topics in generation, transmission, and utilization. These topics have migrated to a more system-based philosophy relating to the application of systems engineering and computer engineering to power system design and operation. As an example, the adoption of smart grid philosophies has come to mean the use of information to design and operate the power system. Automation is at the heart of the smart grid: various operating decisions may no longer be relegated to operator action. Instead, operating decisions spanning a wide range of objectives might be "calculated" digitally and implemented automatically and directly. Safety, redundancy, and reliability are clearly concerns; but as this high level of direct digital control is implemented, it is believed possible to realize the objectives of the smart grid. To this end, the analogy between Internet opportunities and smart grid needs translates into a new philosophy in power engineering education: develop cognitive and cyber skills while focusing on domains of specific expertise.

The transition to smart grid objectives in power engineering has implications in power engineering education. In contemporary power engineering education, there is a focus on:

  • the use of digital technologies for processing sensory signals, utilizing those signals for control, and the use of large-scale digital technologies for system design;
  • renewed interest in energy conversion, especially connected with renewable energy technologies such as wind and solar energy;
  • the integration of economic principles with traditional power engineering topics to coalesce subjects in power marketing and economic operation;
  • use of the mathematics of reliability to capture phenomena of component failure, unavailability of renewable resources, and large-scale system operation;
  • optimization, including multiobjective optimization.

Contemporary trends in power engineering education encourage students to solve complex, multiobjective, multidisciplinary problems at the masters level of sophistication. In the 1980s, power engineering graduate enrollments began to drop, and by 2000 there was expressed concern that there would be a paucity of qualified power engineers [37]. It is expected that 3000–4000 power engineers will be needed in the United States per year by 2013. The production of masters level and doctoral level students in the United States is about 250 and 100 per year, respectively, and bachelor level production is about 1000 per year. The past two years have seen a marked increase in interest in power engineering as measured by student enrollments.

SECTION VIII

INSIGHT INTO THE FUTURE

The power system operation and control function will face a number of exciting challenges in the future. The development of power systems from relatively simple structures to a complex interconnected network with many “information age” features will no doubt evolve into further developments. Examples include the following.

  • Development and commercialization of large-scale electric energy storage: There is a need for large-scale storage which is economical and efficient. If large-scale storage becomes feasible, it will significantly alter the playing field and have a wide-ranging impact on power system operation and control, most significantly on the generation and demand balancing function. Several interesting challenges will arise in determining the optimal location and sizing of storage, charging and discharging storage, and incorporating storage costs in market operation. In addition, the EMS functions outlined in this paper would have to be enhanced to account for large-scale storage, because the flexibility allowed by storage will significantly affect scheduling, balancing, market operation, frequency control, and response to contingencies.
  • Load as a resource: The flexibility to control load in addition to generation adds a significant new dimension to the power system operation and control problem. This is not a new subject: even today, certain large customers sign contracts that allow their load to be interrupted in times of emergency. With the advent of the smart grid and two-way communication between the supplier and the customer, demand side management could become pervasive and require significant new advances in EMS tools.
  • Increased penetration of variable renewable resources: Wind and solar renewable resources are highly variable and could cause significant imbalances between generation and load. As a result, new enhancements to power system operation and control tools are needed. The availability of large-scale energy storage and demand response will directly aid in alleviating the detrimental effects of the variability. However, the EMS tools would have to account for the variability and the concomitant availability of storage or load as a resource to ensure viable market operation and guarantee system reliability.
  • Direct digital control: Direct control of the power system without operator intervention would also result in significant changes to the power system operation and control philosophy and tools. This however would require major enhancements in terms of both hardware and software developments to guarantee reliability and redundancy.
  • Wide area measurements using synchrophasors: There has been a considerable investment in the use of synchrophasors (phasor measurement units based on the GPS for timing). While the applications to date are mainly in enhancing situational awareness, there are many suggestions for wide area controls based on this technology.
  • Electric vehicles: The expanded use of electricity for transportation is another factor which could have far reaching impact on power system operation and control. Issues regarding the load on the system and the capability of the distribution system and the subtransmission system to handle the new load and customer choice with regard to time of use will add new complexities not seen before.
SECTION IX

CONCLUSION

The main conclusion of this paper is that power engineering has been successful in electrifying industry, common living, and communications/information processing worldwide in its first century. The main power engineering advances in the first century of electrification include: development of practical electric energy conversion devices; advances in high-voltage engineering; and the development of synchronously connected electric networks.

The future of power engineering includes challenges of fundamental energy sources, and in the technological development of electronic energy controls. In particular, the following areas are identified: development of viable large-scale energy storage; effective control of loads; large-scale development of renewable resources to attain some measure of sustainability worldwide; direct digital control of bulk energy and power systems; implementation of electric vehicles in a cost and environmentally effective way; and meeting these challenges with commensurate effective educational curricula.

Footnotes

This work was supported by the Power Systems Engineering Research Center (PSERC) and a National Science Foundation and industry supported Industry/University Cooperative Research Center under Grants NSF EEC-0001880 and EEC-0968993. The work of R. Ayyanar and G. T. Heydt was supported by the Future Renewable Electric Energy Distribution Management (FREEDM) Center and a National Science Foundation supported Engineering Research Center under Grant NSF EEC-08212121.

The authors are with the School of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe, AZ 85287 USA (e-mail: heydt@asu.edu; rayannar@asu.edu; kory.hedman@asu.edu; vijay.vittal@asu.edu).


Authors

Gerald Thomas Heydt

Gerald Thomas Heydt (Life Fellow, IEEE) is from Las Vegas, NV. He received the Ph.D. degree in electrical engineering from Purdue University, West Lafayette, IN.

His industrial experience is with the Commonwealth Edison Company, and E. G. & G. He is a member of the National Academy of Engineering. He is currently the site director of a power engineering center program, the Power Systems Engineering Research Center, Arizona State University, Tempe, where he is a Regents’ Professor.

Dr. Heydt is the recipient of the 2010 R. H. Kaufmann Award from IEEE.

Rajapandian Ayyanar

Rajapandian Ayyanar (Senior Member, IEEE) received the M.S. degree from the Indian Institute of Science, Bangalore, India, and the Ph.D. degree from the University of Minnesota, Minneapolis.

Currently, he is an Associate Professor at the Arizona State University, Tempe. His current research interests include topologies and control methods for switch mode power converters, fully modular power system architecture, new pulse width modulated (PWM) techniques, design of power conversion systems and distribution systems for large-scale, distributed integration of renewable energy resources—mainly solar PV and wind, and power electronics applications in enabling “smart grid.”

Dr. Ayyanar received an Office of Naval Research (ONR) Young Investigator Award in 2005. He serves as an Associate Editor for the IEEE TRANSACTIONS ON POWER ELECTRONICS (Letters).

Kory W. Hedman

Kory W. Hedman (Member, IEEE) received the B.S. degrees in both electrical engineering and economics from the University of Washington, Seattle, M.S. degrees in both electrical engineering and economics from Iowa State University, Ames, and the Ph.D. and M.S. degrees in industrial engineering and operations research from the University of California at Berkeley, Berkeley.

He is an Assistant Professor in the School of Electrical, Computer, and Energy Engineering and a graduate faculty in the Department of Industrial Engineering, Arizona State University, Tempe. He has previously worked for the California ISO and for the Federal Energy Regulatory Commission.

Vijay Vittal

Vijay Vittal (Fellow, IEEE) received the B.E. degree in electrical engineering from the B.M.S. College of Engineering, Bangalore, India, in 1977, the M.Tech. degree from the Indian Institute of Technology, Kanpur, India, in 1979, and the Ph.D. degree from Iowa State University, Ames, in 1982.

He is the Ira A. Fulton Chair Professor in the Department of Electrical, Computer, and Energy Engineering, Arizona State University, Tempe. Currently, he is the Director of the Power System Engineering Research Center (PSERC).

Dr. Vittal is a member of the National Academy of Engineering.
