Assessing the Value of Proactive Microgrid Scheduling

Microgrids and multi-microgrids are commonly installed to fulfill rising flexibility needs and to boost system resilience through advanced fault mitigation capabilities. On top of a complex control architecture, proactive resilient scheduling optimizes the operation of such grids in advance. Although several scheduling algorithms include measures to limit the effects of faults, the impact of proactive scheduling on system resilience is rarely assessed. This work presents an advanced simulation-based assessment method that includes an extended power flow formulation to consider low-level control and device capabilities even in islanded mode. A case study assesses resilience gains and costs of proactive scheduling based on multiple algorithms and an extensive set of operating conditions. The results show that even on a test grid specifically designed to challenge scheduling algorithms, a large share of the faults can already be handled by low-level controls without considering them in scheduling. However, the remaining share of unhandled faults can be substantially reduced by advanced proactive scheduling algorithms and an appropriate resilience constraint formulation. Given the evaluation results, it appears justified that in less critical applications, scheduling focuses on economic aspects only, without considering fault mitigation. Nevertheless, a detailed assessment is needed to justify the algorithmic choice and to improve the quality of resilient algorithms. The presented method adds a tool that can efficiently assess the value of proactive scheduling based on extensive simulations.


NOMENCLATURE

DG, a
Set and index of controllable generators.

ST, b
Set and index of storage units.

E(e)
Observed share of event e.

I. INTRODUCTION
Most power systems are faced with fundamental transitions that will drastically alter the way electricity grids are planned and operated. Microgrids and multi-microgrids provide one solution to facilitate an increasing number of volatile Renewable Energy Sources (RES), to rigorously exploit the economic potential of Distributed Energy Resources (DERs), and simultaneously to strengthen the system resilience [1].
Among several competing definitions, this work defines microgrids as tightly integrated electrical networks that can be operated both as islanded and as grid-connected systems [2], [3]. Multi-microgrids extend the concept of individual microgrids by jointly operating them within a distribution system. Despite the high potential in integrating renewables, several microgrid designs still heavily rely on the presence of fossil-fueled generation [4]. Due to policies towards a net-zero CO2 economy, the integration of large shares of RES in microgrids and the further reduction of CO2 emissions became a priority in research [5]. In the literature, a multitude of control approaches is presented to preserve or even increase system resilience while incorporating significant amounts of stochastic generation. Several proactive scheduling approaches, for instance, are presented which balance increasing reserve needs and strengthen the microgrid operation before faults are encountered [3], [6]. Although most of the proactive algorithms follow an optimization-based framework, a broad diversity of problem formulations and solution methods is found. Common differences between algorithms include the level of detail, i.e. the number and abstraction of phenomena that are considered at scheduling time. For instance, [7] focused on provisional microgrids that depend on the grid-forming capabilities of adjacent microgrids, but did not include physical power flow restrictions beyond static bounds. On the contrary, [8] considered detailed voltage and current constraints based on the highly nonlinear AC power flow equations. Commonly, scheduling algorithms are deployed on top of a complex control architecture that manages short-term disturbances, coordinates transitions from and to the islanded mode, and ensures a stable operation of the system [9]. It was shown that scheduling and control decisions can have a significant impact on the stable and safe operation of microgrids [10].
Therefore, several algorithms included physical constraints in their scheduling decisions [11]. However, only very few of them considered low-level controls such as primary frequency regulation or fault reconfiguration algorithms. One of these approaches is introduced in [12], which includes primary frequency control constraints to ensure successful islanding but does not consider reactive power and voltage control requirements. More recently, [3] proposed a hybrid scheduling mechanism that considers both frequency and voltage control requirements in day-ahead scheduling. Yet, storage units are excluded from primary control and saturation effects due to power limits are not covered in detail.
To evaluate such algorithms, several testbeds have been implemented that enable the assessment of critical aspects such as islanding, synchronization, and stability [9], [13]. A broad range of assessment methods including purely simulation-based approaches, hardware-in-the-loop solutions, and field trials can be found. For instance, [14] implements a purely simulation-based testbed to study transient phenomena in exclusively inverter-based microgrids, but does not focus on long-term operation and scheduling. A laboratory-scale testbed that specifically focuses on scheduling is described in [15]. The authors compare the performance of an energy management heuristic to an optimal scheduling formulation and provide first insights into the economic benefits of the optimization-based approach. Yet, only 15 operating scenarios originating from five independent measurement days were used in the economic assessment. Due to the focus on a small, single-bus microgrid, grid reconfiguration actions and the impact of scheduling on physical grid constraints are beyond the scope of [15].
In general, very few approaches specifically target the evaluation of scheduling algorithms in long-term operation. Commonly, the approaches are evaluated on a very limited set of environmental conditions without taking the impact of failure scenarios, detailed forecasting models, and low-level controls on the physical grid operation into account [11], [15]. Due to the limited evaluation, little quantitative evidence on the long-term benefits of proactive and resilient scheduling is collected. Specifically, in the presence of low-level controls such as primary frequency control and heuristic grid reconfiguration schemes, it is not well understood how much intelligence regarding modeling details and solution methodologies is needed at the scheduling level to resiliently operate microgrid and multi-microgrid systems. Still, previous studies give a first indication of possible resilience improvements, but also of increased operation costs and considerable computational burden [3], [4], [11], [16].
Dynamic, transient simulations are well suited to assess the performance of low-level controls in detail [9], [14], but high modeling efforts and the considerable computational costs hinder their application in long-term assessment. Steady-state power flow computations are a common method to reduce the computational burden, but classical formulations are not well suited for islanded microgrids [17], [18]. Several methods that allow modeling of distributed frequency and voltage control without dedicated slack nodes have already been developed. For instance, [17] presents a balanced power flow formulation. Similarly, [18] introduces droop-based voltage and frequency control for both balanced and unbalanced grids. To improve convergence of the unbalanced network equations, an extended Newton-Raphson algorithm is developed. Despite considerable effort, device constraints, RES curtailment, dynamic droop coefficients, and outage conditions are rarely considered in islanded power flows. However, detailed assessment methods covering these aspects are needed to guide future implementation and research efforts in proactive multi-microgrid scheduling.

A. CONTRIBUTIONS TO POWER SYSTEM RESILIENCE
This work investigates the operation performance of various scheduling algorithms on a comprehensive simulation-based testbed and specifically addresses the proactive consideration of network failures, low-level controls, and physical constraints. To the best of our knowledge, for the first time, the impact of day-ahead scheduling formulations on system resilience is quantified based on a large-scale assessment that handles a broad range of operating conditions. A dedicated focus is put on phenomena such as voltage constraints and low-level controls that can, but need not, be considered at scheduling time. Due to the large-scale evaluation covering hundreds of thousands of scenarios, detailed quantitative insights into the impacts of proactive scheduling are provided. Such impacts include the system performance in case of asset failures and the costs in normal operation. All performance metrics are based on an independent set of simulation runs and do not rely on indicators that are directly returned by the scheduling algorithms.
To efficiently cover a broad range of operating conditions, traditional power flow computations are significantly extended to consider dynamic droop controls, RES curtailment, detailed device capabilities, and outage conditions in an islanded grid. In contrast to dynamic simulations, the presented steady-state formulations do not require modeling of dynamic aspects such as time constants and were successfully applied in long-term assessments. Additional real-time controls that are hardly considered in the related scheduling literature include heuristic secondary control and fault rerouting. Hence, this work provides a first indication of whether such facilities can reduce the need for resilience considerations at scheduling time and the resulting computational burden.
In contrast to the state-of-the-art that commonly considers only simple statistical models to characterize forecasting deviations, separate measurement and forecasting data sources are used. Required scheduling inputs are based on numerical weather prediction, while independent measurement data are taken to assess the real-time performance. Due to the clean separation, systematic and correlated forecasting deviations can be considered and common simplifications such as temporally independent errors are avoided. A rich set of failure scenarios that far exceeds the conditions reflected in the scheduling algorithms is induced. Such failures include single line outages that can be tackled by real-time control but may result in unexpected topologies, as well as multi-asset outages that split the grid into independent subgrids. Hence, the assessment specifically includes conditions that are originally not foreseen by the scheduling algorithms.

B. ORGANIZATION
The remaining part of the work is organized as follows: Section II gives a detailed, formal description of the microgrid operation problem including physical asset models and considered control impacts. In Section III, the simulation-based assessment methods are thoroughly described, and in Section IV, the method is applied in a case study to evaluate the value of proactive scheduling. Section V discusses the study results and Section VI concludes this work.

II. MICROGRID OPERATION PROBLEM FORMULATION
Microgrids are typically composed of an interlinked control architecture that keeps parameters such as the system frequency and bus voltages stable, mitigates faults, and ensures economic operation. A brief overview of the control architecture as well as the testbed that assesses the control approaches of this work is provided in Fig. 1. Within the control architecture, scheduling algorithms commonly optimize the microgrid operation with respect to the current state and predicted conditions in advance [11]. At the end of the scheduling horizon or as soon as updates are available, computations are repeated and new setpoints are applied. To ensure a maximum compatibility with existing approaches and to account for daily updated forecasting data [19], this work assumes that scheduling decisions are computed once and are not updated afterwards.
Due to the high computational complexity of the scheduling problem [3], low-level controls that quickly balance out disturbances are needed. This study assumes that the system frequency is controlled by P-of-f droop (i.e., P(f )) and that nodal voltages are influenced by Q-of-U droop (i.e., Q(U )) of participating generators. It is also assumed that storage units are selected as grid-forming devices in the islanded mode and that a transition into that mode is feasible. Since these grid-forming devices require reserve capacity to balance out short-term fluctuations [15], a dynamic droop scheme is used that alters the active power share each storage is providing, according to the current State of Charge (SoC). On top of the droop-based primary control, a heuristic secondary control is established that modifies the high-level scheduling decisions in case insufficient reserve capacity is detected. Additionally, a reconfiguration algorithm modifies tie-line switch states to mitigate the impact of tripping power lines and to reduce the amount of unsupplied load. Since in a practical implementation, all low-level controls need to be operated in real-time, only polynomial-time heuristics are applied.
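The SoC-dependent droop weighting can be illustrated with a short sketch. The paper's exact weighting function is not reproduced here; the linear, headroom-based shares below are an assumption for illustration only.

```python
def droop_shares(socs, df, soc_min=0.1, soc_max=0.9):
    """Hypothetical SoC-dependent droop participation: for an
    under-frequency event (df < 0) storages must discharge, so each
    unit is weighted by its energy above soc_min; for an
    over-frequency event (df > 0) the remaining capacity below
    soc_max is used instead. The paper's exact scheme may differ."""
    if df <= 0:
        w = [max(0.0, s - soc_min) for s in socs]
    else:
        w = [max(0.0, soc_max - s) for s in socs]
    total = sum(w)
    if total == 0:
        return [1.0 / len(socs)] * len(socs)  # fall back to equal shares
    return [x / total for x in w]
```

The sign-dependent weighting mirrors the idea that a nearly full storage should carry more of the discharge duty and a nearly empty one more of the charge duty.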
It is assumed that all low-level controls stably operate the microgrid. Hence, the assessment focuses on the steady-state impact. Transient studies that are needed to ensure a stable operation, islanding, and reconnection of the microgrid are well beyond the scope of this work. In contrast to scheduling that operates on forecasts only, it is also assumed that all low-level controls have access to real-time measurements and the previously calculated setpoints. In addition, it is assumed that topological information including fault locations is available in real-time. As illustrated in Fig. 1, the performance of the microgrid is assessed by a series of power flow calculations that incorporate the steady-state impact of low-level control approaches and detailed device constraints. After each power flow calculation, the storage states and secondary control actions are updated and the subsequent calculation is started.
Although related work introduces several specific asset types, such as controllable loads, Electric Vehicles (EVs), and micro turbines [7], [20], [21], this study focuses on the most common assets [11] to simplify the interpretation of results. Two generic, schedulable asset types are modeled: Distributed Generators (DGs) that can be freely controlled within their limits and storage units that depend on the current SoC. Additionally, two volatile RES types (PV and WTs) as well as uncontrollable loads are included. All asset types are reflected in the scheduling formulations and in the independent evaluation. However, the level of detail between input data sources and considered failure modes differs significantly between the scheduling and evaluation formulations.

A. VOLATILE RENEWABLES AND LOADS
Volatile RES and loads are modeled by two different sets of input variables. One, p v i,t , v ∈ {PV, WT, LD}, describes the PV, WT, and load forecasts that are available at scheduling time. On the contrary, P v i,t describes the measurements that are available in real-time only. It is assumed that load forecasts and measurements are directly available, e.g. in terms of standard load profiles and smart meter measurements, whereas the amount of PV and wind power is computed based on meteorological forecasts and observations. Due to the broad availability of meteorological measurements, this study calculates both forecast and measurement based on asset models. Nevertheless, the real-time RES models can be substituted by direct power measurements in case sufficient on-site data is available.
The available output power of WT w is calculated by the turbine curve ρ WT w that translates the wind speed into the turbine's output power. Given the wind speed forecasts and measurements at time t, ν w,t and V w,t , respectively, the available powers p WT w,t and P WT w,t are obtained. Following related work [21], the PV output of plant c is modeled proportionally to the in-plane irradiance forecast g c,t and measurement G c,t . Furthermore, outputs are corrected by an optional temperature coefficient k PV c utilizing the deviation of the array temperatures τ PV c,t and T PV c,t , respectively, from the nominal temperature T *,PV c . Equations (1) and (2) show the PV generation model.
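A minimal sketch of the two asset models follows. The 1000 W/m² and 25 °C reference values, the linear temperature correction, and the piecewise-linear turbine-curve interpolation are assumptions for illustration and are not taken from the paper; both forecast and real-time power result from the same functions evaluated on forecast or measured meteorology.

```python
def pv_power(irradiance, cell_temp, p_rated, g_ref=1000.0,
             temp_coeff=-0.004, t_ref=25.0):
    """Sketch of the PV model (1)-(2): output proportional to the
    in-plane irradiance, corrected by a linear temperature
    coefficient around an assumed reference point."""
    p = p_rated * (irradiance / g_ref) * (1.0 + temp_coeff * (cell_temp - t_ref))
    return max(0.0, p)

def wt_power(wind_speed, curve):
    """Turbine curve as a piecewise-linear lookup over assumed
    (speed, power) points, standing in for rho_w^WT."""
    pts = sorted(curve)
    if wind_speed <= pts[0][0]:
        return pts[0][1]
    for (v0, p0), (v1, p1) in zip(pts, pts[1:]):
        if wind_speed <= v1:
            # linear interpolation between neighboring curve points
            return p0 + (p1 - p0) * (wind_speed - v0) / (v1 - v0)
    return pts[-1][1]  # above the last point: hold rated output
```

Feeding forecast wind speeds ν w,t yields p WT w,t, while measured speeds V w,t yield P WT w,t, matching the forecast/measurement separation described above.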
B. SCHEDULING ALGORITHMS

Based on the forecasts p WT w,t , p PV c,t , and p LD l,t as well as the initial storage conditions e ST b,−1 , the scheduling algorithm S(·) calculates the control setpoints p DG a,t , o DG a,t , and p ST b,t . To model the level of detail that is considered by an algorithm S(·) and to assess the impact on the microgrid operation, different formulations based on prior work [3] are considered. A detailed formulation of the algorithms can be found in the original publication, which assesses the computational performance but does not focus on operational aspects.

1) ECONOMIC SCHEDULING S EC (·)
The least level of detail is modeled by a purely economic Mixed Integer Linear Programming (MILP) formulation of the scheduling problem that neither includes grid constraints nor considers reserves that are needed for a successful islanding transition. Storage units b ∈ ST are modeled in (3) to (7). Storage losses are included by a constant round-trip efficiency. DG units a ∈ DG are constrained by the minimal and maximal active power, p DG a and p̄ DG a , as given in (8).
Loads and RES are included by their expected power demand and output without considering any emergency measures. Main grid transfers are considered by the directional variables p BUY t and p SELL t as well as a directional indicator o SELL t ∈ B, as shown in (9) and (10).
For each time step, a simple active power balance (11) reduces the topology to one single bus without including topological information or physical effects such as losses.
The overall objective is to minimize the operating costs c TOT determined by the power setpoints and the DG operating costs c DG a as well as main grid transfer costs c BUY t and benefits c SELL t within the scheduling horizon, as given in (12). All computations are based on deterministic forecasts without considering stochastic fluctuations and associated risks.
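Stripped of binaries and storage dynamics, the single-bus balance (11) with linear costs (12) reduces to a problem where dispatching the cheapest sources first is optimal. The merit-order sketch below illustrates this single-period relaxation; it is not the MILP formulation itself, and main-grid purchase can be represented as just another source.

```python
def economic_dispatch(load, costs, p_max):
    """Merit-order dispatch: fill the cheapest sources first. Optimal
    for the LP relaxation of the single-bus balance (11) with linear
    costs (12); unit-commitment binaries, storage, and selling are
    omitted in this sketch."""
    order = sorted(range(len(costs)), key=lambda i: costs[i])
    p = [0.0] * len(costs)
    remaining = load
    for i in order:
        p[i] = min(p_max[i], remaining)  # dispatch up to the unit limit
        remaining -= p[i]
        if remaining <= 0:
            break
    total_cost = sum(c * x for c, x in zip(costs, p))
    return p, total_cost
```

For instance, with a cheap but small DG, an expensive DG, and a mid-priced grid purchase, the cheap unit is saturated first and the grid covers the remainder.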

2) RESERVE-AWARE SCHEDULING S RE (·)
In addition to economic scheduling, S RE (·) includes further constraints which ensure that enough storage capacity and spinning reserve is available to sustain a main grid outage until further DG can be started. The reserve constraints in [3] are slightly extended by a scenario-based formulation that introduces safety coefficients and accounts for secondary-control delays. For each time step t ∈ T and storage b ∈ ST, the emergency power p E,DCH b,t that can be provided until additional generation is started and the power p E,CHG b,t that can be maximally absorbed until excess generation is stopped are modeled. Both variables are constrained by the storage state and its power ratings as shown in (13) to (16).
Given the reserve coefficients k v,RE a of asset a and scenario v, the net load including RES p v,NetLD t is first defined by (17).
The reserve requirements are then modeled as (18) and (19).
3) PHYSICS-CONSTRAINED SCHEDULING S PH (·)
In addition to the economic and reserve-constrained formulations, the physics-constrained algorithm asserts that the power flow must converge for the given setpoints and that voltage, frequency, and loading limits are met. In contrast to [3] that uses a commercial power system simulator to execute the embedded power flow calculations, this work includes the extended formulation as given in Sections II-C to II-G. Nevertheless, all volatile inputs p s,PV c,t , p s,WT w,t , p s,LD l,t are based on a static set of worst-case scenarios s that is generated from the available forecasts only. For each scenario, the AC power flow is solved. The resulting bus voltage levels u s i,t , i ∈ BS and line current magnitudes ι s i,t , i ∈ LI are constrained as u ≤ u s i,t ≤ ū and ι s i,t ≤ ῑ i , respectively. In addition, the frequency f s i,t on each island i and scenario s needs to be within its permissible limits f ≤ f s i,t ≤ f̄.

C. SECONDARY CONTROL
To assess the impact of day-ahead scheduling decisions on the emergency operation, the steady-state impact of the most essential low-level controls is modeled. Primary control alters the active power generation setpoints P •,DG a,t to balance out short-term fluctuations. For storage units b, additionally, the maximum power that can be supplied or absorbed until secondary control actions take effect is considered.
To estimate the maximum power that can be provided or absorbed for a period of T , (20) and (21) define the power limit heuristics P̄ E b (E, T ) and P E b (E, T ) at a storage state E, using the efficiency curves µ CHG b (P) and µ DCH b (P).
Since the storage efficiency depends on the output power itself, a worst-case efficiency is assumed to limit convergence issues while solving the equations. Given the dynamic power limits based on the storage state, the reserve requests (22) and (23) are calculated from the nominal output power range and the power that cannot be provided due to energy limits.
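A minimal version of the power-limit heuristics (20) and (21) might look as follows; the constant worst-case efficiencies and the exact placement of the efficiency factors are assumptions for illustration.

```python
def discharge_limit(energy, e_min, p_max, horizon, eta_dch=0.9):
    """Sketch of (20): the power a storage can sustain for `horizon`
    hours is capped both by its rating and by the usable energy above
    e_min; a constant worst-case efficiency replaces the
    power-dependent curve mu^DCH(P), as discussed in the text."""
    return min(p_max, max(0.0, energy - e_min) * eta_dch / horizon)

def charge_limit(energy, e_max, p_max, horizon, eta_chg=0.9):
    """Counterpart of (21): the maximum power that can be absorbed
    until the capacity e_max is reached."""
    return min(p_max, max(0.0, e_max - energy) / (eta_chg * horizon))
```

A nearly empty storage is thus energy-limited rather than rating-limited, which is exactly what the reserve requests (22) and (23) account for.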
To compute the secondary control actions, first, the reserve power requests for each island i, P R,v i,t are computed by (24).
The secondary control algorithm SEC implements a greedy heuristic that changes the DG status setpoints o DG a,t to closely meet the reserve request. In each iteration, one DG status is altered that shifts the remaining reserve requirement closest to zero. Algorithm 1 defines the procedure for a single island and one reserve request direction in more detail. Since the set of candidate machines M decreases monotonically, it can be seen that the computations terminate within polynomial time.
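The greedy commitment step of Algorithm 1, which is only summarized above, could be sketched as follows; the stopping condition and tie-breaking are assumptions consistent with the description, not the paper's exact procedure.

```python
def secondary_control(request, status, p_nom):
    """Greedy heuristic after Algorithm 1: repeatedly switch on the
    offline DG whose nominal power moves the remaining reserve
    request closest to zero. The candidate set shrinks every
    iteration, so termination in polynomial time is immediate."""
    status = list(status)        # do not mutate the caller's setpoints
    remaining = request
    while remaining > 0:
        candidates = [i for i, on in enumerate(status) if not on]
        if not candidates:
            break
        # unit whose activation minimizes the residual request
        n = min(candidates, key=lambda i: abs(remaining - p_nom[i]))
        if abs(remaining - p_nom[n]) >= abs(remaining):
            break                # no unit improves the residual
        status[n] = True
        remaining -= p_nom[n]
    return status, remaining
```

For a power-surplus island the same routine can be run on the inverted statuses, mirroring the shut-down case described below.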
In case an island i shows a power surplus, i.e., f i > f *, SEC is applied to the inverted operating status ¬o DG i,t of all DG units on that island to compute the assets that need to be shut down. Equation (25) formalizes this status update. In case a DG unit a is newly scheduled, (26) applies the nominal output value as power setpoint P •,DG a,t .

D. PRIMARY CONTROL
In islanded operation, short-term fluctuations are commonly balanced by droop-based real-time control [17], [18]. As illustrated in Fig. 1, the steady-state impacts of primary control are considered in the extended load flow. Each operational DG a adjusts its active power setpoint P •,DG a,t according to the locally measured frequency f i,t . Since the model focuses on the steady state, for each electrically connected island in the microgrid, a single frequency variable is introduced. Given the topology function IL j that returns the island of asset j as well as the droop coefficient k f,DG a , the primary frequency control is modeled as (27).
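The steady-state effect of the P(f) droop (27) on one fully connected island can be sketched as follows, ignoring the saturation handled later by the device models: a common frequency deviation distributes the power imbalance proportionally to the droop gains. Units (MW per Hz) and sign conventions here are assumptions.

```python
def island_frequency(setpoints, droops, net_load, f_nom=50.0):
    """Closed-form steady state of the droop equations on a single
    island: each unit injects P* - k * (f - f_nom), and the common
    frequency f makes the total injection meet the net load."""
    df = (sum(setpoints) - net_load) / sum(droops)
    f = f_nom + df
    outputs = [p - k * df for p, k in zip(setpoints, droops)]
    return f, outputs
```

A generation surplus thus raises the island frequency, and every droop-controlled unit backs off in proportion to its gain until the balance is restored.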
In addition to DG, storage units b also contribute directly to primary frequency control. However, the units implement a dynamic scheme that gradually reduces the droop contribution as the SoC approaches its limits. To quickly enter the nominal operating range again, the reduction further depends on the sign of the frequency deviation, as given in (28) and (29).
In contrast to DG and storage units, it is assumed that volatile RES do not participate in regular frequency control. However, a limited frequency sensitive mode following [22] is implemented to reduce the infeed in case of severe over-frequency events. Considering the nominal operating boundary f̄ *, the output power of asset i of type v ∈ {PV, WT} is calculated as (30).
For each asset j of type v, the topology function v j specifies the bus that asset j is connected to. The reactive power setpoint Q •,v i,t of all generation units i of type v ∈ {DG, ST, PV, WT} is controlled by a static Q-of-U droop k u,v i and the locally measured voltage magnitude U v i ,t , as modeled by (31).

E. DEVICE CONSTRAINTS
For each generation unit i of type v, a set of active and apparent power limits is introduced to model saturation effects in the power flow computations. In general, the active power setpoints from the primary control, P •,v i,t , are directly limited by the minimal and maximal supported active power P v i and P̄ v i , respectively. The active power takes precedence over reactive power outputs, which are curtailed to limit the total apparent power. The active power output of volatile RES is specifically defined by (32), which considers optional inverter constraints by an additional limit P̄ v i .
The DG model is given in (33). The storage model tracks the energy e ST b,t that is stored at time t. Given the charging and discharging efficiency curves µ CHG b (P) and µ DCH b (P), the storage state is advanced by (34).
Limited storage capacity is accounted for by the energy-dependent power boundaries (21) and (20), respectively. Hence, the active output power is modeled as (35).
Given the active output power of asset i, the reactive power limit Q̄ v i,t of all asset types v is calculated as (36) and the reactive output power Q v i,t as (37).
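The precedence rule behind (36) and (37) amounts to clipping the droop-derived reactive setpoint against the capability that remains after the active dispatch, e.g.:

```python
import math

def reactive_limit(p_out, s_rated):
    """Apparent-power cap as in (36): active power takes precedence,
    the leftover capability bounds the reactive output."""
    return math.sqrt(max(0.0, s_rated ** 2 - p_out ** 2))

def saturate_q(q_setpoint, q_max):
    """Clip the Q-of-U droop setpoint to the capability, cf. (37)."""
    return max(-q_max, min(q_max, q_setpoint))
```

At full active output the reactive capability collapses to zero, which is exactly the saturation effect the extended power flow has to capture.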

F. PHYSICAL GRID MODEL
In case an electrically connected part of the grid i is itself connected to the main grid, a Point of Common Coupling (PCC) is modeled by two slack variables P EX i,t , Q EX i,t and a constant voltage U EX i,t on the connected bus. Simultaneously, the frequency is fixed to f i = f * in order to model inactive primary and secondary controls. In case the electrically connected island i is not itself connected to any external grid, f i is kept as a free variable that models a distributed slack. To reduce non-converging power flows due to the detailed saturation model, an emergency model is introduced for each island. As soon as the system frequency exceeds the permitted range, the virtual emergency power P EM i,t models the power that would be needed to stabilize the system. To support the convergence of the entire power flow, (38) introduces an emergency droop k f,EM OB that determines the power in case the frequency exceeds the permitted band and a small but positive droop heuristic k f,EM IB , with k f,EM IB ≪ k f,EM OB , that additionally supports convergence.
The injected active and reactive net powers of each bus, P BS i,t and Q BS i,t , respectively, are calculated as (39) and (40). Note that V is defined as the set of all generation unit types V = {DG, ST, PV, WT, EX, EM}, including any external grid connections and the virtual emergency power source.
The basis of the microgrid model is then given by the well-known AC power flow equations. To strengthen the comparability to related work [3], [11], [21], the balanced power flow model is used. For each bus i ∈ BS, a voltage magnitude U i,t and angle ϕ i,t are introduced. Given P BS i,t and Q BS i,t as well as the admittance matrix entries for the buses i, j ∈ BS, |Y | i,j and θ i,j , the power flow equations are given by (41) and (42) [17], [18].
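As a minimal numerical illustration of (41) and (42), the toy system below solves the balanced power flow for one slack bus and one PQ load bus by Gauss-Seidel iteration. This is only a didactic stand-in; the actual assessment uses the hybrid root-finding algorithm of [27], and the line parameters here are invented.

```python
def gauss_seidel_pf(y_line, s_load, v_slack=1.0 + 0j, iters=200):
    """Two-bus balanced AC power flow: slack bus (fixed v_slack) and
    one PQ bus with complex load s_load (consumption positive),
    connected by a line with series admittance y_line (p.u., no
    shunts), solved by Gauss-Seidel iteration."""
    y21, y22 = -y_line, y_line            # relevant admittance matrix entries
    v2 = 1.0 + 0j                         # flat start
    for _ in range(iters):
        i2 = (-s_load / v2).conjugate()   # current for the injection S2 = -s_load
        v2 = (i2 - y21 * v_slack) / y22   # Gauss-Seidel update of the PQ bus
    return v2
```

For a purely resistive line (y_line = 10 p.u.) and a 0.5 p.u. load, the iteration converges to the analytic solution V2 = (1 + √0.8)/2 of the quadratic voltage equation.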

G. EMERGENCY GRID RECONFIGURATION
A grid reconfiguration scheme models the effect of real-time topology reconfiguration actions that isolate faults and reconnect the remaining sections, if possible. It is assumed that all tie-line switches can be remotely controlled well below the simulation step size T St . Furthermore, it is assumed that all faults can be located and isolated such that no healthy section of the network is directly affected. At the beginning of each scenario and after each topological change (i.e., faults or repair actions), the reconfiguration heuristic is executed. The main goal is to establish a maximally connected, healthy, and radial network. Hence, islanding will be avoided in case an external grid connection is feasible, and each island will be as large as possible to share available power reserves. Since the study focuses on the steady-state effects only, it is assumed that all configurations can be stably operated and that grid forming and black-start are adequately addressed within each island having at least one operational DG or storage unit. The grid reconfiguration task is mapped to a minimal spanning forest problem that is solved in polynomial time using Prim's algorithm [23]. Each line l is mapped to an edge of the graph and the edge weight c LI l,t is guided by the line admittance after clearing the fault Y l,t . To limit the number of switching operations and to account for lines that cannot be isolated by remotely operated switches, the initial operating status of line l connecting buses i and j, O LI l,0 , is considered in the weight heuristic (43) as well.
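The reconfiguration step can be sketched with a Prim-style search that grows a minimal spanning tree from every not-yet-visited bus, which together yields a spanning forest over the healthy grid. The weight heuristic (43) itself is not reproduced; weights are assumed to be precomputed, and faulted lines are simply excluded from the input.

```python
import heapq

def spanning_forest(n_buses, lines):
    """Prim-style minimal spanning forest: `lines` is a list of
    (weight, bus_a, bus_b) tuples over the healthy network. Returns
    the indices of the lines to close; every connected component
    becomes one radial island."""
    adj = {i: [] for i in range(n_buses)}
    for idx, (w, a, b) in enumerate(lines):
        adj[a].append((w, b, idx))
        adj[b].append((w, a, idx))
    visited, closed = set(), set()
    for root in range(n_buses):
        if root in visited:
            continue
        visited.add(root)               # start a new tree for this component
        heap = list(adj[root])
        heapq.heapify(heap)
        while heap:
            w, bus, idx = heapq.heappop(heap)
            if bus in visited:
                continue                # closing this line would form a loop
            visited.add(bus)
            closed.add(idx)
            for edge in adj[bus]:
                heapq.heappush(heap, edge)
    return closed
```

On a three-bus ring, the heuristic keeps the two cheapest lines and leaves the most expensive tie line open, preserving radiality.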

III. BENCHMARKING METHODS
One of the research goals is to quantify the impact of scheduling algorithms on the complex operation of a multi-microgrid and the resulting system resilience. To study the long-term effects, a simulation-based study that focuses on steady-state phenomena is chosen. Fig. 1 shows the main components of the assessment method including the dedicated grid simulation. For all algorithms under test, a common set of input conditions (e.g., forecasts and the corresponding measurements) is generated and the impacts of the scheduling decisions are independently evaluated. Due to the identical inputs, the results can be directly compared without considering stochastic fluctuations among test runs. In contrast to the preliminary work [24] that describes the concepts of a microgrid testbed, this work significantly refines the models, drastically increases the number of considered conditions, and presents detailed results on several algorithms. Since fault mitigation options and consequently the impact on the system resilience largely depend on the considered grid and included assets, the scheduling algorithm needs to be chosen according to local requirements. For instance, a network that is designed to accept all scheduling states needs less consideration than a grid that is operated close to its limits. The presented method targets the efficient case-specific evaluation by a generalized assessment framework that solves the system model given in Section II.

A. SCENARIO GENERATION
The assessment requires an extensive set of inputs including dynamic grid prices, environmental conditions, and load profiles. Since several inputs such as solar irradiation and wind speed [25], [26] show a considerable temporal correlation, first, a subset of scheduling time frames is selected from the available days in the long-term measurement and forecast series. According to each of the absolute time frames, the input measurements and forecasts are selected without the need to reduce the long-term time series to a consecutive period. Since the inputs are based on common time frames, the correlations among different data sources, such as seasonal effects on energy consumption, are modeled as well.
In contrast to the RES generation forecasts that are based on numerical weather predictions targeting the particular measurement time and location, load forecasts are based on generic profiles. Hence, possibly sensitive information that is needed to model user behavior and load forecasts can be kept at a minimum. Such information on loads includes only the type of load (e.g., household or agricultural load) and the yearly energy consumption.
The environment conditions are amended by a detailed set of failure scenarios that are exposed to the real-time models only. Each failure scenario temporarily alters the operating status of selected assets such as lines and the external grid connection and may trigger real-time actions such as grid reconfiguration. All failure scenarios are considered as rare events that cannot be well quantified in a limited Monte Carlo simulation. To specifically focus on the system resilience in such rare events, for each set of environmental input conditions, all failure scenarios as well as a reference scenario without any fault are applied.

B. SIMULATION-BASED ASSESSMENT
For each previously defined input scenario, the RES generation is predicted and a dedicated scheduling run using the algorithm under test is conducted. All algorithms under test follow an optimization-based approach and therefore solve the cost minimization problems defined in Section II-B. All MILP formulations are directly solved by exact mathematical programming techniques. In case a problem turns out to be infeasible (e.g., due to its reserve requirements), a default output that does not schedule any generation at all is returned, and the microgrid is operated by its real-time controls only. The highly nonlinear physical constraint formulation cannot be solved by a MILP solver; therefore, the hybrid heuristic optimization technique defined in [3] is applied. In case the heuristic method finds no feasible solution that satisfies all constraints, the best known schedule, which may still result in some constraint violations, is used in the assessment.
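The infeasibility fallback policy can be written down in a few lines (sketch; `solve_milp` and the problem structure are stand-ins, not the actual solver interface):

```python
def schedule_with_fallback(solve_milp, problem):
    """Return the exact MILP solution, or an empty default schedule.

    solve_milp: stand-in for an exact solver; assumed to return None when
                the problem is infeasible (e.g., due to reserve requirements)
    problem:    assumed dict with a 'generators' entry listing unit names
    """
    solution = solve_milp(problem)
    if solution is None:
        # infeasible: schedule no generation at all, so the microgrid is
        # operated by its real-time controls only
        return {gen: 0.0 for gen in problem["generators"]}
    return solution
```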
Given the results of the scheduling run, the failure scenarios are applied and, for each set of real-time conditions, an independent evaluation of the real-time operation is conducted. At the beginning of each scenario and after status changes, the fault reconfiguration algorithm is executed and the topological information, including the admittance matrix Y and the connected assets, is computed. Afterwards, the system model including primary and secondary control is solved in a series of power flow computations. For each time step, a dedicated computation is triggered and the internal states, such as the secondary control setpoints and the storage states, are updated. The set of equations that describes the system state as defined in Section II is numerically solved by the hybrid root-finding algorithm of [27].
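The per-step solve-then-update structure can be illustrated on a toy single-bus islanded system (a deliberately reduced sketch: bisection stands in for the hybrid root-finding algorithm of [27], and the droop model is a simplification of the full formulation in Section II):

```python
def run_island(loads, units, dt=1.0):
    """Toy time-stepped simulation of an islanded bus.

    loads: list of load values, one per time step
    units: list of dicts with 'sched' (scheduled power), 'droop' (power per
           Hz of deviation), and optionally 'soc' (storage energy state)
    Returns the frequency deviation trajectory.
    """
    def power(unit, df):
        return unit["sched"] - unit["droop"] * df   # P(f) droop response

    trajectory = []
    for load in loads:
        # root of the power balance: total droop output minus load
        mismatch = lambda df: sum(power(u, df) for u in units) - load
        lo, hi = -1.0, 1.0                  # search window in Hz (assumed)
        for _ in range(60):                 # bisection on the mismatch
            mid = 0.5 * (lo + hi)
            if mismatch(lo) * mismatch(mid) <= 0.0:
                hi = mid
            else:
                lo = mid
        df = 0.5 * (lo + hi)
        for u in units:                     # internal state update per step
            if "soc" in u:
                u["soc"] -= power(u, df) * dt
        trajectory.append(df)
    return trajectory
```

In the actual method, each step instead solves the full extended power flow of Section II, but the pattern of a dedicated root-finding solve followed by a state update per time step is the same.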

C. PERFORMANCE METRICS
The quality of all scheduling algorithms is quantified by their impact on the real-time operation of the network and whether the most important grid constraints can be met. As such, it is evaluated whether the bus voltages are within the permitted voltage range and whether overloading of assets such as lines is observed. The occurrence of such constraint violation events is addressed by the rate E(e) that counts the share of events e in the total number of time instants in the set of interest. For instance, E(U^s_{i,t} < U̲_i) gives the ratio of undervoltage events to the total number of time steps at bus i. Similar aggregations are conducted for overload events E(I^s_{i,t} > Ī_i) of line i as well. Additionally, the fault mitigation rate E(mtg), i.e., the share of time steps in the fault duration that fully avoid any voltage, frequency, and loading violation, is defined in (44).
The mitigation rate indicates performance improvements compared to statically operated distribution systems that cannot automatically mitigate any fault. In contrast to the other event rates, E(mtg) specifically focuses on the system performance in times of induced failure conditions without considering other outages due to improper operation and scheduling decisions. Although the event rates E(e) quantify the number of constraint violations well, the impact of such events is not covered. One common metric to describe the impact of a violation on the supplied loads is the (expected) energy not served E_NS,s, which describes the amount of energy that cannot be supplied due to outage conditions in scenario s [11], [28]. Since this work does not rely on probabilistic failure models, the unsupplied energy E_NS,s is always aggregated for a certain failure mode such as main-grid outages. Following the definition of E(mtg), outage conditions include severe voltage and frequency band violations beyond a given threshold as well as overload events, which are assumed to trigger an immediate shutdown of electrically connected subgrids. Note that a detailed model of the protection system that includes cascading faults exceeds the scope of this work by far. Therefore, it is assumed that the status of all assets is tightly monitored and that any constraint violation immediately triggers a complete loss of load on the subgrid without considering further degraded states.
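The three metrics reduce to simple counts over the simulated time steps; a minimal sketch (function and argument names are illustrative, not the paper's notation):

```python
def event_rate(flags):
    """E(e): share of time instants at which event e is observed."""
    return sum(flags) / len(flags)

def mitigation_rate(violation_flags):
    """E(mtg): share of fault-duration time steps without any voltage,
    frequency, or loading violation (cf. (44))."""
    return sum(not v for v in violation_flags) / len(violation_flags)

def energy_not_served(load, outage_flags, dt=1.0):
    """E_NS: energy that cannot be supplied during outage conditions,
    aggregated over the time steps flagged as outages."""
    return sum(p * dt for p, o in zip(load, outage_flags) if o)
```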
To assess the economic performance of a scheduling algorithm, the total operating costs as encountered in the independent grid simulation, C_TOT,s of scenario s, are taken. Hence, C_TOT,s incorporates forecasting deviations and does not rely on the cost estimate committed at scheduling time.

IV. CASE STUDY
The case study aims at demonstrating the large-scale assessment method and giving first detailed insights into the performance of several scheduling algorithms. Three base algorithms are selected that represent different levels of detail and complexity. The first one implements simple economic scheduling without considering resilience or forecasting deviations, the second one includes linear sufficiency constraints that target a successful islanding, and the most complex algorithm adds nonlinear grid constraints. In addition, several algorithmic variants that study the impact of worst-case formulations and forecasting deviations are considered.
All algorithms were evaluated on a common test system that is specifically designed to challenge the algorithm under test and to trigger extreme cases that may not be found in other distribution systems. In contrast to related work, the case study covers a rich set of operating conditions and performs a large-scale assessment of manifold failure scenarios. In the following, a detailed description of the test system as well as the evaluation results of all algorithms are given.

A. BENCHMARK SYSTEM
The topology of the benchmark system is based on a commonly used test grid, the Baran test feeder, that was specifically designed to challenge algorithms under test [3], [11], [29]-[31]. Although the test system is widely used in scheduling, several authors include extensions to fully support the assessment of multi-microgrids. This work follows the extensions of [3] but increases the share of volatile RES and the available storage capacity to specifically focus on highly loaded, low-emission power systems. In addition, the tie-lines and switches that are present in the original Baran test feeder [29] are modeled in this work as well. Fig. 2 shows the network topology including loads, generation units, tie-lines, and switches. It is assumed that every switch in the diagram can be remotely operated by the reconfiguration algorithm. The detailed parameters of all schedulable generation units can be found in [3]. The PV and WT units are each increased to a maximum apparent power of S̄^PV. To avoid frequent deep discharge and provide additional operating reserves, the upper and lower capacity limits for scheduling are set to 95% and 5% of the total capacity, respectively. In addition to a constant storage efficiency for scheduling as described in [3], a detailed efficiency curve according to [32] and [33] is included in the physical grid model. Following [3] and [34], Q-of-U control (i.e., Q(U)) scales the maximal reactive power between 0.92 p.u. and 1.08 p.u. for all active generation units. Likewise, the P-of-f droop (i.e., P(f)) is chosen such that the whole operating range of all active DG and storage units is covered within a ±200 mHz deviation range. A nominal storage range Ē^{*,ST}_b to E̲^{*,ST}_b of 0.8 p.u. to 0.2 p.u. is configured for all storage units. The permissible voltage and frequency limits that trigger loss of load and generation are set to 0.9 p.u., 1.1 p.u., and ±400 mHz, respectively.
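Under the stated parameters, the two droop characteristics can be sketched as piecewise-linear curves. This is a hedged illustration only: the exact curve shapes and sign conventions of the Q(U) and P(f) controls are assumptions here, with only the 0.92-1.08 p.u. voltage window and the ±200 mHz band taken from the configuration above.

```python
def q_of_u(u, q_max, u_lo=0.92, u_hi=1.08):
    """Assumed Q(U) droop: full reactive injection at the low end of the
    voltage window, full absorption at the high end, linear in between."""
    if u <= u_lo:
        return q_max
    if u >= u_hi:
        return -q_max
    return q_max * (1.0 - 2.0 * (u - u_lo) / (u_hi - u_lo))

def p_of_f(df_hz, p_min, p_max, band=0.2):
    """Assumed P(f) droop: the whole operating range [p_min, p_max] is
    covered within a ±200 mHz deviation band, as configured above.
    Sign convention (full output at -200 mHz) is an assumption."""
    df = max(-band, min(band, df_hz))       # saturate outside the band
    return p_min + (p_max - p_min) * (band - df) / (2.0 * band)
```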
The reforecast dataset [19], which covers several decades of state-of-the-art forecasting outputs aligned with historic data, represents the scheduling-time predictions. The forecasts are spatially and temporally aligned with the measurements from [35]-[39], which are taken to model the full dynamics of meteorological phenomena in high temporal resolution. WT power curves are taken from [40] and the nominal in-plane irradiance G* is set to 1 kW/m². Fig. 3 illustrates the statistical distribution of the accumulated volatile generation calculated from the measurement series. For each time of day, the boxplot shows the total generation quartiles excluding outliers as calculated by [41] and the average generation over all scenarios as a green triangle. Clearly, the daytime pattern induced by the PV generation is visible. Load forecasts are modeled by the static load profiles [42] that match the measurement profiles taken from [43]. For all scenarios, the total real-time load is illustrated in the boxplot of Fig. 4. Day-ahead prices are available at [44] and illustrated in Fig. 5. The operating costs of the DGs are directly taken from [3]. Table 1 shows the forecasting error for all asset types relative to the maximum output power. For convenience and to ease comparison with other datasets, the evaluation includes the standard deviation of the error and the Root-Mean-Square Error (RMSE) in addition to the mean absolute error. For PV outputs, both the daytime and the whole-day error including trivial night-time predictions are given.
To assess the performance in case of contingencies, several main-grid, single-line, and branch faults are modeled. However, to keep the assessment computationally tractable, no exhaustive failure definition is applied. Instead, Table 2 shows the faulty assets in each category. Note that all whole-branch faults isolate a section of the grid that needs to be operated in islanded mode. On the contrary, the studied single-line faults can always be compensated by grid reconfiguration. In line with related work, a fault clearance time of three hours is modeled [21], [30]. For each faulty asset, eight different incident times covering the entire scheduling period result in 144 failure cases and one normal operating case. Given the sample size of 365 environmental scenarios, a total number of 52,925 scenarios per algorithm is covered. Similar to the related work on complex power flow computations in islanded systems [17], [18], a total share of 0.016% of all power flows does not converge. Consequently, scenarios with non-converging power flows were removed from the evaluation and are not considered in the metrics.
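The scenario count stated above can be reproduced from the given figures (the number of faulty assets is inferred as 144 cases divided by 8 incident times; all other values are taken from the text):

```python
incident_times = 8
failure_cases = 144
faulty_assets = failure_cases // incident_times   # 18 assets from Table 2
cases_per_day = failure_cases + 1                 # plus one normal case
environmental_days = 365
total_scenarios = cases_per_day * environmental_days

# consistency check against the numbers stated in the text
assert faulty_assets * incident_times == 144
assert total_scenarios == 52925
```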

B. ECONOMIC SCHEDULING
A purely economic scheduling algorithm S_EC(·) that does not include any resilience constraints at all establishes the baseline for resilient multi-microgrid scheduling. Fig. 6 to 8 show the constraint violation rates for overvoltage E(U^s_{i,t} > Ū*), undervoltage E(U^s_{i,t} < U̲*), and overload events E(I^s_{i,t} > Ī_i), respectively. Note that the voltage-related events consider the tighter scheduling-time bounds of Ū* = 1.05 p.u. and U̲* = 0.95 p.u., aligning with the same safety margins as physics-constrained scheduling. Nevertheless, the average unserved energy E_NS shown in Fig. 9 considers the wider protection-related limits to compute the amount of lost load. In case an algorithm avoids all constraint violations of a particular type, no statistics are shown in the graphics.
One can observe that the purely economic algorithm does not adhere to the tight voltage band used for scheduling and consequently shows a considerable number of overvoltage events near WT2 for all failure types and normal operation. Given the wider safety-related voltage limits, no violation in normal operation mode and only a marginal maximum event rate of 0.029% per asset in case of single-line faults are seen. Similarly, only a few undervoltage events, mostly occurring on islanding faults, are observed for both bounds. Since the network is designed to host nominal loads without overload events, all failures that do not involve grid reconfiguration actions can be tolerated without overload events. However, for single-line faults, a considerable overload rate of up to 0.11% is observed. Fig. 8 indicates that, due to the reconfiguration actions and the nature of the test grid in challenging algorithms under test, small-sized lines such as line 18 as well as tie-lines 35 and 36 are mostly affected. Similar overload events can be observed on whole-branch failures that include grid reconfiguration actions as well. Fig. 9 shows the average unserved energy E_NS per day and failure type. No unserved load is observed in normal operating scenarios, and single-line faults do not trigger as much loss of load as incidents that result in islanding actions. To relate the observed loss E_NS to the best known solutions, a lower bound over all assessed algorithms is calculated. For each input scenario, the best known solution having the least unserved energy is taken. The lower bound also includes reference runs that cannot be practically implemented and therefore only serves as a theoretical guidance metric that describes the best known system performance.
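The lower-bound construction is a per-scenario minimum over all assessed algorithms; a minimal sketch (the data layout is an assumption for illustration):

```python
def lower_bound_ens(results):
    """Best known E_NS per input scenario across all assessed algorithms.

    results: dict algorithm -> {scenario_id: E_NS}
    Since the bound mixes results from different (including impractical
    reference) runs, it only serves as theoretical guidance.
    """
    scenarios = next(iter(results.values())).keys()
    return {s: min(per_alg[s] for per_alg in results.values())
            for s in scenarios}
```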
The fault mitigation rates of the economic scheduling algorithm for all failure types are listed in Table 3. It can be seen that a large share of single-line faults is handled by the grid reconfiguration algorithm without any indicated voltage, frequency band, or loading violation, but that some failure conditions cannot be avoided. Specifically, for main-grid and whole-branch failures that operate parts of the grid in islanded mode, slightly reduced mitigation rates are observed. Given the purely economic scheduling results, it can be seen that already a large share of faults is compensated by the low-level controls without the need of considering them in scheduling.
Due to the economic scheduling formulation, no infeasible scenario is detected and hence no fallback schedule is used. Fig. 10 shows a boxplot of the operating cost distribution achieved by purely economic scheduling in normal operation. One can see that due to the high share of RES, several days have negative operating costs. On average, a financial baseline of $674.89 per day is established.

C. RESERVE CONSTRAINT SCHEDULING
The reserve-aware scheduling formulation S_RE(·) adds linear sufficiency constraints that manage available storage and spinning DG reserves until secondary control can take further actions. Three variations of the sufficiency-based formulation are assessed. The first one, deterministic sufficiency-based scheduling, solely applies the constraints to the nominal operation scenario as predicted, without taking any deviations into account. The second one, the robust sufficiency-based algorithm, defines two worst-case scenarios that both need to be covered by scheduled reserves. Following related work [3], a maximum load case assumes a 20% reduction of volatile RES generation and a 20% increase of all loads. Similarly, a maximum generation case alters all loads and RES power outputs by a factor of 0.8 and 1.2, respectively. To specifically study the impact of forecasting deviations on the results, a third sufficiency-based scheduling run with perfect predictions is added. Naturally, the perfect run only serves as a best-case reference that cannot be reached with realistic forecasts. Fig. 6 to 10 and Table 3 include the results of all reserve-constrained scheduling runs. Similar to purely economic scheduling, no overload event in normal operation is seen. However, for deterministic, robust, and perfect scheduling, maximum overload rates of 0.11%, 0.1%, and 0.11%, respectively, are observed with single-line faults. Again, the narrow scheduling-related voltage band is not adhered to and all algorithms show overvoltage events. However, in the wider safety-related band, overvoltage events are encountered only at single-line faults, with a maximum event rate of 0.029% and 0.026% per asset for the deterministic and robust case, respectively. Most undervoltage-related events are observed at whole-branch faults that are not targeted by the formulation itself. Nevertheless, even with the 0.9 p.u. limit, undervoltage events at whole-branch faults are encountered for the deterministic and perfect variations, with maximum rates per asset of 0.0017% and 0.0037%, respectively. Still, a few undervoltage events regarding the narrow scheduling-related voltage band are seen at single-line faults (deterministic and robust) and even at main-grid faults (perfect forecasts), but none of them are visible in the safety-related statistics.
Although the sufficiency-based variations consider main-grid outages, a considerable amount of lost load is encountered for all three variations. However, only a marginal share of 0.00%, 4.03%, and 0.00% of all violations for the deterministic, robust, and perfect scheduling algorithm, respectively, can be traced back to infeasible problems. In total, 100.00%, 96.99%, and 100.00% of the respective scheduling runs are feasible. Given the observation that even perfect forecasting without any infeasible schedules shows a significant amount of unserved energy, it is demonstrated that the linear approximation does not fully prevent outages. As illustrated in Fig. 10, the deterministic, robust, and perfect sufficiency-based scheduling show average operating costs of $675.64, $679.93, and $642.02, respectively.

D. PHYSICS CONSTRAINT SCHEDULING
Physics-aware hybrid scheduling S_PH(·) follows the same worst-case assumptions as the reserve-constrained formulation S_RE(·), but additionally considers the voltage, frequency, and loading constraints of the detailed power flow model. Again, the impact of forecasting deviations on the scheduling performance is studied by a reference run that assumes a perfect forecast instead of the detailed prediction data. Due to the comprehensive constraints, in total 74.25% and 82.19% of all hybrid and perfect hybrid runs, respectively, converge to a feasible solution. For all other cases, the best known solution instead of a generic default schedule is taken as the basis for further evaluations.
Both hybrid variations show few violations of the tight scheduling-related overvoltage bound, but neither algorithm manages to avoid constraint violations entirely. On the contrary, both configurations avoid undervoltage constraint violations except for whole-branch failures. Given the wider voltage bounds, only a few overvoltage events (with a maximum event rate of 0.011% per asset) in case of single-line failures and even fewer undervoltage events (with a maximum rate of 0.0020% per asset) in whole-branch failures actually lead to loss of load. Again, one can observe a considerable number of overload events in case fault reconfiguration actions are taken. In particular, the hybrid and perfect hybrid algorithms show overload rates of up to 0.13% and 0.12%, respectively, in case of single-line faults that are not covered by the worst-case assumptions. Fig. 9 still shows a considerable amount of lost load for both algorithms in case of main-grid and whole-branch faults. Nevertheless, only 40.18% and 34.80% of the main-grid fault scenarios that show lost load for hybrid and perfect hybrid scheduling can be accounted for by infeasible and non-convergent cases. In particular, the hybrid optimization run that uses perfect forecasts demonstrates the impact of worst-case assumptions on the scheduling performance. Although the real-time measurements in the reference run are known, hybrid scheduling assumes a full-time outage as the worst case while the validation step asserts a three-hour fault duration. Hence, the system state in the validation runs can differ from the tolerable worst-case assumption and may lead to loss of load.
As illustrated in Fig. 10, the hybrid and perfect hybrid evaluation show the average operating costs of $883.01 and $737.11, respectively. Despite the tight resilience constraints, both variants still show several scenarios in which earnings from selling excess energy or consuming electricity in case of negative grid prices outweigh the cost of generating and buying electricity.

V. DISCUSSION
In contrast to related work, this assessment covers a large variety of operating conditions and failure modes. The method includes an independent evaluation step that cleanly separates the information available at scheduling time from that available in real time. Hence, this work shows several detailed effects on the system resilience, such as the impact of failures that are not directly covered by the scheduling algorithms. The large-scale assessment is driven by an extended power flow formulation considering a high level of detail, such as individual device constraints and low-level controls in partially islanded power systems. Since the method is based on steady-state power flows, an efficient replication without the need for dynamic models is expected.
Due to a common set of input scenarios and system configurations, the outcomes of each algorithm can be directly compared without considering stochastic fluctuations among single validation runs. Although the highly loaded benchmark system that is specifically designed to challenge algorithms under test does not show any safety-relevant events under normal operating conditions, the implemented fault mitigation measures call for active grid capacity management in abnormal cases. For instance, severe line overloading events of up to 380% are observed after grid reconfiguration measures. Since the grid is operated beyond static worst-case boundaries, either the scheduling algorithm or a dedicated dynamic grid capacity management needs to assign safe operating limits for all relevant assets to avoid such violations.
Given the high fault mitigation rates of economic scheduling ranging from 87.0% at whole-branch faults that include partially islanded grids to 98.6% at single-line faults that can be rerouted, it can be seen that even in the challenging test grid a large share of events can already be handled by appropriate low-level controls. Nevertheless, a considerable influence of scheduling-time algorithms on the remaining events that cannot be fully handled by low-level control alone is found. For instance, the algorithmic choice shows significant impact on the unserved energy E NS that incorporates severe voltage and frequency violations leading to loss of load. Hybrid scheduling reduces the average lost load in case of main-grid outages by 40.5% with respect to the purely economic baseline. Similarly, robust sufficiency-based scheduling already achieves an E NS reduction of 15.5% and a slight decrease of 7.0% can still be seen in the deterministic sufficiency-constrained case.
Note that all algorithms except the purely economic base case directly consider main-grid outages but introduce different levels of abstraction to formulate the corresponding constraints. As such, the least abstract formulation with the highest level of detail (i.e., the hybrid scheduling formulation) achieves the least unserved load. Nevertheless, even in case of hybrid scheduling, necessary simplifications such as whole-day grid outages lead to a significant lost load of 66 kWh on average over all feasible hybrid scheduling runs. The increasing share of non-converging or infeasible scheduling runs of up to 25.75% in hybrid scheduling and the corresponding lost load further indicate a considerable amount of unserved energy that cannot be avoided by the studied measures. The same observation can be made from the lower bound shown in Fig. 9, indicating a significant number of lost-load scenarios that cannot be avoided by any of the scheduling algorithms.
In contrast to failures that are directly considered by the scheduling formulations, only a reduced impact of the algorithms on the system performance in case of unconsidered incidents is observed. Still, hybrid scheduling can reduce the amount of lost load by 24.3% and 15.5% for single-line and whole-branch faults, respectively. Nevertheless, other algorithms show even less performance improvement and some variations such as deterministic sufficiency-based scheduling with whole branch faults even show a reduced performance.
From a resilience point of view, all robust formulations can handle forecasting deviations well and only show marginal degradation compared to their idealistic counterparts that assume a perfect forecast. For instance, on main-grid faults, only a reduction in lost load of 2.5% and 2.4% for sufficiency-based and hybrid scheduling, respectively, is observed when eliminating forecasting errors. In the overvoltage chart of Fig. 6, an average overvoltage rate reduction of 27.9% and 56.0% was observed for the sufficiency-based and hybrid algorithms when assuming perfect forecasts, but due to safety margins needed to account for fluctuations such as those induced by the upstream grid, no major reduction in the loss of load is observed.
A more severe impact of forecasting deviations can be observed on the operating costs drawn in Fig. 10. Specifically, the hybrid scheduling algorithm shows a considerable increase in the average operating costs of 19.8% for the robust variant compared to the perfect forecast. Hence, advanced forecasting techniques that reduce the corresponding errors can have an impact on the economic performance of hybrid scheduling. In case of the linear formulation, only a cost increase of 5.9% of the robust variant compared to the perfect reference is seen. In general, the observed resilience gains come with an additional cost for robust sufficiency-based and hybrid scheduling of 0.7% and 30.8%, respectively. The presented large-scale evaluation method allows balancing additional costs and benefits on a detailed per-case basis.

VI. CONCLUSION AND OUTLOOK
Driven by the need to assess the performance of resilient (multi-)microgrid scheduling algorithms, this work presents an extensive assessment method that specifically focuses on resilience aspects and the impact of scheduling decisions on real-time operation. It is successfully demonstrated that despite the complex power system model, which includes primary and secondary control as well as emergency response measures, a large variety of input conditions such as failure scenarios and RES generation can be practically covered. Hence, the need for strong simplifications, including limited operating scenarios, is drastically reduced in practice. Although the method focuses on the individual assessment of microgrid installations, a detailed case study already provides several insights into the resilient operation of (multi-)microgrids, the impact of scheduling algorithms on the system performance, and promising research perspectives.
Even on the test system that is specifically designed to challenge scheduling algorithms under test, a large majority of the assessed failures, including 94.2% of all main-grid and 98.6% of all single-line faults, can already be mitigated by low-level control and real-time mitigation techniques alone, without considering resilience aspects in scheduling. Several practical applications that tolerate the remaining chance of lost load therefore justify focusing on purely economic scheduling without considering resilience aspects.
Nevertheless, the choice of the scheduling algorithm shows a considerable influence on the remaining outages that cannot be avoided by low-level controls alone. Specifically, an influence of the scheduling formulation, including the representation of physical phenomena and failure modes, on the remaining lost load is found. The advanced hybrid optimization algorithm that considers physical grid constraints and low-level control at scheduling time shows the greatest potential in reducing the impact of failures. Hence, it can be concluded that both future work on and the evaluation of resilient scheduling algorithms need to put a strong focus on the representation of physical aspects and on accurately modeling failure modes in scheduling. The independent validation step of the presented assessment method allows such modeling aspects to be addressed without directly relying on scheduling-time metrics.
Given the results from the reference runs using perfect forecasts, it can be seen that the forecasting quality has little impact on the system resilience and that stochastic phenomena such as forecasting deviations can be well handled by a few worst-case scenarios and static safety margins. However, a considerable influence of forecasts on the economic performance is found. To further reduce operating costs, future work can focus on improving the accuracy of forecasts and on an improved stochastic representation. Even under perfect forecasting conditions, the strict scheduling constraints that target a full avoidance of any impacts lead to a considerable number of infeasible problems. Further research on the assessment of soft constraints permitting a certain level of degradation, and of additional flexibility such as load shifting, needs to be undertaken to quantify the impact of such measures.
Future work on the assessment method itself includes an advanced model of the protection system that allows cascading faults to be considered, more detailed models of the upstream grid affecting the (multi-)microgrid, as well as the implementation of additional real-time fault mitigation and control techniques that can integrate further flexibility. To include more detailed control and component models, further improvements on the convergence of islanded power flow computations are needed. Additionally, work on the large-scale integration of dynamic simulations can further raise the confidence in a stable operation in case stability cannot be assured otherwise. Finally, the presented evidence on the value of resilient scheduling is limited to a single, thoughtfully evaluated test grid. Further research is needed to study proactive scheduling on a large variety of networks including related benchmarks and real-world systems. By presenting the large-scale assessment framework, this work lays the foundation for such investigations and provides a tool for efficient case-specific analysis.

DER
Distributed Energy Resource.