The Effect of Time-Varying Value on Infrastructure Resilience Assessments

Infrastructure resilience for a scenario can be assessed quantitatively from resilience curves that plot the evolution of system performance. Summary metrics map these curves to a single value to facilitate comparisons of different systems, scenarios, and policies. Commonly, these metrics are integral-based, e.g., cumulative infrastructure performance. However, since these curves and metrics only examine infrastructure performance, they fail to consider the dynamics of when stakeholders value the performance. For example, a power failure at a hospital during an ordinary day would be of less concern to emergency managers than during a hurricane recovery. This manuscript defines value weighting functions to represent the evolution of stakeholders’ value of performance. Temporal correlation between the performance and value weighting functions is described through a stochastic offset. Together, these concepts are used to define a holistic resilience metric: percent value satisfied. Through analytical and numerical approaches, percent value satisfied is treated as a resilience assessment’s stochastic output, and its distribution is compared to the naïve metric, cumulative infrastructure performance. The naïve summary metric is shown to be misleading in multiple potential scenarios; this work establishes that resilience assessments must consider the impact of time-varying stakeholder value and its correlation with infrastructure performance. These elements also provide new considerations for resilience assessments: opportunities to improve holistic system resilience without directly affecting infrastructure performance; hazard categories to inform value-weighted resilience analysis; and general insights to guide extensions from performance-based to value-weighted assessment.


I. INTRODUCTION
Infrastructure resilience analyses often examine the progression of performance measures as the system resists, absorbs, and recovers from a disruption. Denoted the resilience curve [1]-[4], this progression has been adopted as both a conceptual illustration [5]-[7] and the basis for quantitative analyses. Resilience curve-based analyses include identifying critical components [8], [9], positioning recovery resources [10], sequencing recovery actions [11]-[13], and selecting between system configurations [14]-[16]. Such quantitative analyses require definition of system performance measures and summary metrics.
Infrastructure systems exist for the provision of services [17]. Derived from system states, a performance measure describes the system's ability to provide services or the quality of those services. (The associate editor coordinating the review of this manuscript and approving it for publication was Xiao-Sheng Si.) Three main categories of measures
can be defined: availability, productivity, or efficacy [18]. Specific definitions vary across domains and analysis techniques. Examples include: connected electrical generation capacity [19], functional cranes at a seaport [20], volume of gas supplied [21], average vehicle speed [22], and travel time in a transportation network [23]. In most cases, measures are normalized relative to a performance target or baseline (e.g., percent of customers with utility service [24], [25] or satisfied demand [26], [27]). A resilience curve is defined by the progression of a specific performance measure over the scenario duration.
A summary metric quantifies a resilience curve with a singular value. Metrics vary in their formulation and focus; examples include failure or interruption duration [28], [29], recovery rate [30], [31], residual performance [32], [33], and indices with multiple considerations [34]. Summary metrics are notionally selected as a reflection of stakeholder goals (e.g., failure rate for ''graceful degradation'' capability [24], [57]).
He and Yuan highlighted that water shortages to critical services (e.g., hospitals) cause greater societal loss than shortages to residential customers [58]. Although not generally described as such, performance thresholds indicate nonlinear variation between a measure's constituent states. With examples like maximum acceptable loss [14], [59] and minimum performance boundary [60], [61], stakeholders implicitly assign different value to performance units above and below the threshold.
In some cases, spatial variation can be resolved by applying static weighting to constituent states when defining the performance measure (e.g., linking gas, electric, and water demand nodes to social vulnerability indices [62]). But static weighting is insufficient to reflect temporal variation in stakeholder value. Considering road network restoration, Ulusan and Ergun incorporated a time-decaying ''benefit function'' into their cumulative performance objective function [63]. Cimellaro et al. proposed calculating pre- and post-disruption cumulative performance separately, to be combined with weighting factors [40]. Ouyang et al. proposed sampling the resilience curve at stakeholder-specified milestones, each with its own weighting [64]. Tuzun Aksu and Ozdamar applied weighting by path ''earliness'' to reflect variation in their restoration goals [65]. Moslehi and Reddy applied time-varying weighting for heating, cooling, and power systems, reflecting the season and time of day [35].
Despite these examples, the impact of time-varying stakeholder value on resilience assessments is generally underexplored. The dynamics of market pricing (e.g., traffic congestion pricing [73], price elasticity of water demand [74], electrical demand response [75]) might be used to represent stakeholder value, but such concepts have not extended broadly to the infrastructure resilience literature. For example, some electrical analyses seek to minimize load shedding cost, but incorporate a time-invariant conversion parameter [76]-[78]. Beyond market mechanisms, establishing values for time-varying weighting is non-trivial. Weighting schemes may be estimated from historical data. Cimellaro et al. applied time-series cross-correlation functions to establish regional resilience indices for power, water, and gas over distinct recovery stages [79], [80]. Dueñas-Osorio and Kwasinski used a similar method analyzing the 2010 Chilean earthquake [81]. However, without historical data, weighting values may be outside the scope of a model or analysis. He and Yuan proposed a nodal weighting factor but assigned equal values in their case study [58]. Liu et al. proposed a similar approach, while also leaving weighting unexplored in their simulation [82]. This manuscript does not attempt to resolve challenges in defining time-varying value; instead, it seeks to explore how such variation could impact the subsequent analysis and design.
By exploring the impact of time-varying value, this manuscript contributes to two challenges in infrastructure resilience analysis: system boundaries and model fidelity. As conceptually illustrated in Fig. 2, resilience analyses are often focused on how a hazard disrupts the system of interest (highlighted in red). The ''system of interest'' might extend to coupled or interdependent infrastructure (e.g., power, water, and telecommunications [83]), but there is always a system boundary. Yet infrastructure systems exist to provide services; their very purpose establishes relationships with the environment. Infrastructure resilience modelers are thus faced with the challenge of balancing scope and fidelity. In many cases, assumptions at the system boundary are adopted for modeling simplicity, but acknowledged as potentially relevant (e.g., post-disaster water [58], electrical [84], or traffic [23] demand). Generally, assumptions are implemented when they expand the model or analysis from one system to another (e.g., from the electrical system to the behavior of customers connected to the electrical system). But the context for a resilience assessment, specifically the value stakeholders place on infrastructure performance, is often derived from the environment outside the system of interest.
Returning to Fig. 2, this manuscript focuses on the interaction between infrastructure performance (system states) and time-varying stakeholder valuation of that performance (context). This perspective does not exist within the current literature: no other work deliberately examines the impact of time-varying weighting on quantitative resilience assessments. Rather than rely on specific mechanisms linking performance and stakeholder value, this manuscript adopts the expectation that both will change in time. Generic performance and value functions are aligned by a stochastic time offset. The product of these two functions provides a holistic resilience summary metric. This holistic metric is compared to the result obtained by considering the infrastructure performance alone (i.e., the naïve expectation of system resilience). Together, this exploration seeks to establish the need for, and potential applicability of, time-varying stakeholder value within quantitative resilience assessments.
This manuscript consists of four subsequent sections. Section II outlines the manuscript's materials and methods. This defines the forms for infrastructure performance, value weighting, and system resilience. Section III applies analytical and numerical analysis to examine specific forms of performance, value weighting, and the distribution of their offset (i.e., correlation in time). Section IV synthesizes these results into considerations for resilience assessment. This includes tradeoffs and resilience improvement opportunities, proposed hazard categories, and general insights. Section V concludes the manuscript with six categories of future work.

II. MATERIALS AND METHODS
This section describes the structure with which time-varying value will be explored. The first subsection defines conventions for infrastructure performance, stakeholder value, and system resilience with notation, assumptions, and constraints. The second subsection outlines the analytical and numerical approaches implemented to examine their relationships.
A. DEFINING INFRASTRUCTURE PERFORMANCE, STAKEHOLDER VALUE, AND SYSTEM RESILIENCE
This subsection defines the notation, assumptions, and constraints for key concepts in this manuscript. Fig. 3 illustrates key concepts through an example: the performance of a system in a given resilience scenario evolves according to a known and fixed performance function p(t). The stakeholders' value of this performance varies according to a known value-weighting function v(t) that is offset from p(t) according to τ. Three different realizations of v(t − τ) and associated weighted impact are presented in Fig. 3(a)-(c). Each pair of p(t), v(t − τ) curves can be summarized by a function J(τ), the percent value satisfied. Fig. 3(d) shows how this summary metric changes as the offset τ is swept across the time domain of interest. Each of these elements is described in detail below, and notation is summarized in Table 1.
FIGURE 2. Infrastructure resilience assessments commonly model a hazard's impact on the system of interest (highlighted in red). Analyses are often bound to minimize or neglect relationships between the hazard, environment, and interdependent systems, all of which provide context to the assessment. This manuscript considers how resilience assessments can be affected by variation in both infrastructure performance, p(t), and context via value weighting, v(t). When specific forms of p(t) and v(t) are described independently, temporal alignment is described by time offset, τ.

Infrastructure performance, p(t), is the evolution of a performance measure for a generic infrastructure system of interest. Consistent with common practice and without loss of generality, the performance measure is presented as a normalized value such that 0 ≤ p(t) ≤ 1. Within this manuscript, infrastructure performance is a general concept that can be applied to any system with a well-defined baseline or nominal value for normalization.
Cumulative infrastructure performance, Ĵ, is the performance-based assessment of system resilience. Its complement is cumulative infrastructure disruption, L̂. Both summary metrics are defined entirely by p(t):

\hat{J} = \int_0^1 p(t)\, dt \quad (1)

\hat{L} = 1 - \hat{J} = \int_0^1 \left[1 - p(t)\right] dt \quad (2)

Because performance and time are both normalized, each metric has the range Ĵ, L̂ ∈ [0, 1]. These metrics are calculated without any context from the surrounding environment (i.e., no reference to the value stakeholders place on performance at any specific time). As such, both are denoted estimates, and Ĵ is described as the naïve resilience metric.
To expand beyond infrastructure performance-focused resilience assessment, this manuscript introduces time-varying value weighting, v(t). Whereas p(t) quantifies the infrastructure system's performance, v(t) quantifies the relative value stakeholders place on performance at each point in time. Both functions share the same control interval. If implemented, analyses may attempt to quantify v(t) directly or to incorporate a reasonable proxy; for example, Zhang et al. quantified power and water system performance with functional nodes, then weighted by daily changes in demand [90]. Practically, p(t) and v(t) may not be independent. As in Fig. 2, performance and value might respond to shared system dynamics or hazard exposure. However, this manuscript considers the general impact of value weighting, independent of its source or mechanism.
The ultimate goal of v(t) is to inform a holistic resilience metric. Value weighting can be applied to either performance, p(t), or its deviation from the target level, 1 − p(t). The latter provides a convenient intermediate step with the time-varying weighted impact, ℓ(t):

\ell(t) = v(t)\left[1 - p(t)\right] \quad (3)

Fig. 3(b) illustrates a value weighting function (with zero time offset, to be discussed shortly) along with infrastructure performance. The resulting weighted impact is shown to vary in time due to both performance degradation and changes in stakeholder value.
The integral of ℓ(t) provides the summary metric percent value lost, L:

L = \int_0^1 \ell(t)\, dt \quad (4)

This is highlighted and labeled with ℓ(t) in Fig. 3(b). Its complement is percent value satisfied, J, which may also be calculated directly from p(t) and v(t):

J = 1 - L = \int_0^1 p(t)\, v(t)\, dt \quad (5)

Like Ĵ and L̂, both metrics have the range J, L ∈ [0, 1]. This manuscript adopts percent value satisfied, J, as the holistic resilience metric; that is, stakeholders are primarily interested in increasing J.
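These definitions can be checked numerically. The sketch below uses Python as a stand-in for the manuscript's MATLAB computations; the Gaussian dip and surge shapes are illustrative assumptions, not forms prescribed by this manuscript.

```python
import numpy as np

# Illustrative discretization of the normalized control interval t in [0, 1).
n = 2000
t = np.arange(n) / n
dt = 1.0 / n

# Hypothetical "u"-shaped performance dip and "n"-shaped value surge.
p = 1.0 - 0.8 * np.exp(-((t - 0.4) / 0.1) ** 2)   # 0 <= p(t) <= 1
v = 1.0 + 2.0 * np.exp(-((t - 0.5) / 0.15) ** 2)  # raw (unnormalized) value
v = v / (np.sum(v) * dt)                          # enforce: integral of v equals 1

J_hat = float(np.sum(p) * dt)   # cumulative infrastructure performance (naive)
L_hat = 1.0 - J_hat             # cumulative infrastructure disruption
J = float(np.sum(p * v) * dt)   # percent value satisfied (holistic)
L = 1.0 - J                     # percent value lost
```

Because the assumed value surge overlaps the performance dip, J falls below the naïve Ĵ, previewing the correlation effects examined in Section III.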
Equation (5) is used to establish two constraints (C1 and C2) on the abstracted value weighting function. (C1) Value weighting is assumed to be non-negative. Negative values would imply times at which increased infrastructure performance decreases the assessed system resilience. This is rejected as inconsistent with the purpose of a performance measure. If, within a practical analysis, negative weighting is necessary to reflect stakeholders' value perception, then a new performance measure should be selected. (C2) The value weighting function is constrained such that the resulting summary metric adheres to the range J ∈ [0, 1]. The extremes of this range can be interpreted as ''the system could not have performed worse'' (J = 0) and ''the system could not have performed better'' (J = 1). Independent of v(t), these edge cases occur when p(t) = 0 ∀t ∈ [0, 1] and p(t) = 1 ∀t ∈ [0, 1], respectively. The latter provides J = \int_0^1 1 \times v(t)\, dt = 1. Together these constraints demand:

v(t) \geq 0, \qquad \int_0^1 v(t)\, dt = 1 \quad (6)

If raw units of value are known (e.g., conversion to currency) and described by ν(t), then value weighting adheres to (6) through v(t) = ν(t) / \int_0^1 ν(t)\, dt. Considered another way, the constraints (6) serve as value weighting normalization across the control interval. If v(t) < 1 at any point, then there must be a corresponding time at which v(t) > 1.
While v(t) is non-negative, there is no upper bound. The Dirac delta function, v(t) = δ(t − t_v), is an extreme but valid value weighting function (discussed as edge-case scenario (S2) in the next section). Within this manuscript, it is assumed that the control time duration is sufficiently long to encompass an appropriate range of stakeholder value; otherwise, comparisons between scenarios may be invalid.
Within this normalized form of v(t), values are only meaningful relative to one another. For example, in Fig. 3, v(t) is shown to vary between v(0) = 0.67 and v(0.5) = 4.0. Presented alone, neither point provides particular insight into how stakeholders value the system. But presented together, one can conclude that stakeholders value infrastructure performance at t = 0.5 six times more than at t = 0. While convenient for analysis within this manuscript, this approach may also be useful for stakeholder engagement and system modeling. Alternatively, currency provides a raw unit of ''value'' [37], [91], [92], but such a conversion may be difficult or distasteful for some systems, such as transportation or hospitals. To that end, stakeholders may find it useful to express time-varying value in relative terms.
For a given scenario, it is expected that both p (t) and v (t) reflect emergent system behavior. Both may be anticipated or described with causal mechanisms, and those mechanisms may be interdependent. However, such mechanisms are beyond the scope of this manuscript.
Referencing Fig. 2, this manuscript considers how the summary metric J (and its relation to the naïve metric Ĵ) is affected by variations in v(t) and stochasticity of the time offset, τ, between v(t) and p(t). To facilitate analytical treatment, this manuscript represents p(t) and v(t) as periodic functions, both with a period that matches the control duration. This assumption of periodicity is supported by its common use in signal processing, i.e., assuming a signal is periodic when computing its discrete Fourier transform (DFT) [93]. Practically, an analysis's control interval would be selected in consideration of both p(t) and v(t). Inconsistencies caused by the periodicity assumption can be reduced by selecting a sufficiently long control interval (e.g., zero-padding, as is done with DFTs). With v(t) periodic, the time-varying weighted impact can be expressed with parameter τ:

\ell(t; \tau) = v(t - \tau)\left[1 - p(t)\right] \quad (7)

Fig. 3(a)-(c) illustrate the same p(t) and v(t) functions with three distinct time-offset values, each providing a unique weighted impact, ℓ(t; τ). With p(t) and v(t) periodic over t ∈ [0, 1], the summary metrics can be written in terms of time offset, τ, as percent value lost, L(τ), and percent value satisfied, J(τ):

L(\tau) = \int_0^1 v(t - \tau)\left[1 - p(t)\right] dt \quad (8)

J(\tau) = 1 - L(\tau) = \int_0^1 p(t)\, v(t - \tau)\, dt \quad (9)

For a given p(t) and v(t), the time offset can be considered a random variable over the domain τ ∈ [−1/2, 1/2] with probability density function (PDF) f_T(τ). This PDF represents the temporal correlation of p(t) and v(t). Percent value satisfied then becomes an output random variable, described by f_J(J). Descriptive statistics can be derived from these distributions.
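Under the periodicity assumption, v(t − τ) is a circular shift, so J(τ) can be swept numerically. A minimal Python sketch follows (the sinusoid-based shapes are assumed for illustration; the manuscript's own experiments use MATLAB):

```python
import numpy as np

n = 1000
t = np.arange(n) / n              # one period, t in [0, 1)
dt = 1.0 / n

# Assumed periodic forms: a performance dip and a value surge.
p = 1.0 - 0.7 * np.sin(np.pi * t) ** 4
v = np.sin(np.pi * t) ** 2
v = v / (np.sum(v) * dt)          # normalization constraint: integral of v = 1

def J_of_tau(tau):
    """Percent value satisfied at offset tau, with v(t - tau) as a circular shift."""
    v_shifted = np.roll(v, int(round(tau * n)))
    return float(np.sum(p * v_shifted) * dt)

J_hat = float(np.sum(p) * dt)
taus = (np.arange(n) - n // 2) / n            # grid over [-1/2, 1/2)
J_curve = np.array([J_of_tau(tau) for tau in taus])
```

Sweeping τ traces a J(τ) curve like the one in Fig. 3(d); assigning a distribution f_T(τ) then turns J into a stochastic output.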
In summary, the performance of an infrastructure system within a resilience scenario is measured by infrastructure performance, p(t). The naïve resilience metric, cumulative infrastructure performance Ĵ, is determined solely by p(t). Stakeholders may not value infrastructure performance equally at all times, so value weighting, v(t), is defined. For a given scenario, the functions p(t) and v(t) determine the holistic resilience metric, percent value satisfied J. Given specific forms of p(t) and v(t), the temporal alignment of infrastructure disruption and stakeholder value is described by the stochastic time offset, τ, with PDF f_T(τ), which captures the temporal correlation between p(t) and v(t). Together, these elements provide a stochastic output J(τ) with PDF f_J(J). Descriptive statistics from this distribution can be compared to the naïve metric Ĵ.

B. ANALYSIS METHODOLOGY
This manuscript specifically examines relationships between the naïve estimator Ĵ and the more comprehensive representation of system resilience, J. Within an analysis, the former relies entirely on infrastructure states, whereas the latter requires a representation of stakeholder values, often derived from context external to the system of interest. From the perspective of modeling complexity, focusing solely on Ĵ simplifies the scope of an analysis; indeed, it is widely adopted across the literature. Using generalizable analytical and numerical methods, this manuscript explores how the value weighting function v(t) and temporal correlation f_T(τ) can impact the suitability of Ĵ as an estimator of J.
Within this manuscript, both p(t) and v(t) are treated as fixed periodic functions for a given scenario. These functions are independent except for the time offset τ applied to v(t − τ). Fig. 3 illustrates this relationship, with three examples of τ = {−0.3, 0, +0.3} in (a)-(c), respectively. The offset PDF f_T(τ) conceptually represents the likelihood of alignments between p(t) and v(t); at the extremes τ = ±1/2, p(t) and v(t) are completely out of phase.
Infrastructure performance, p (t), is characterized by a disruption followed by recovery, e.g., ''u''-shaped. When v (t) is ''n''-shaped (e.g., a value surge) the offset τ describes the expected correlation between the infrastructure disruption and the change in stakeholder value. For various forms of p (t) and v (t), this manuscript considers time offsets as an input random variable, τ ∈ T, and the resulting summary metric percent value satisfied as an output random variable, J ∈ J .
First, this manuscript considers edge-case forms of v(t) and f_T(τ) that provide convenient, closed-form descriptive statistics for f_J(J). Next, trigonometric forms are described for p(t) and v(t) that provide an analytical formulation of f_J(J) when f_T(τ) is symmetrical. While these contrived functions may not extend to realistic performance and value functions, the results are used to explore the impact of changes in p(t), v(t), and f_T(τ). Analytical results are validated through numerical methods, specifically discrete-time computation in MATLAB. Finally, numerical methods are applied to the ''resilience triangle'' paradigm [94], for which closed forms are unavailable. Again, the analysis broadly explores the impact of changes in p(t), v(t), and f_T(τ). Throughout, this manuscript considers practical consequences of describing context in terms of a value weighting function.

III. RESULTS
This section explores general relationships between three elements: infrastructure performance, p(t); value weighting, v(t); and the temporal correlation between p(t) and v(t), as described by the PDF of their time offset, f_T(τ). The time offset, τ ∈ T, is treated as an input random variable. Percent value satisfied, J ∈ J, is treated as an output random variable (i.e., the holistic resilience metric). The distribution f_J(J) is specifically compared to the infrastructure-performance-based estimate Ĵ (i.e., the naïve resilience metric).
Descriptive statistics can be derived from a specific PDF f_J(J). Alternatively, the expected value E[J] can be determined from p(t), v(t), and f_T(τ) using the law of the unconscious statistician; this also provides the variance Var(J). From (9), the expected value and variance can be written generally as:

E[J] = \int_{-1/2}^{1/2} J(\tau)\, f_T(\tau)\, d\tau \quad (10)

Var(J) = \int_{-1/2}^{1/2} J(\tau)^2\, f_T(\tau)\, d\tau - E[J]^2 \quad (11)

These forms provide a starting point for analytical exploration.
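As a numerical sketch of these integrals (Python stand-in; the J(τ) form and the triangular offset density below are hypothetical placeholders, not forms from this manuscript), the law-of-the-unconscious-statistician expressions reduce to quadrature over an offset grid:

```python
import numpy as np

taus = np.linspace(-0.5, 0.5, 2001)
dtau = taus[1] - taus[0]

# Hypothetical summary-metric function of the offset, J(tau).
J_hat, gamma = 0.6, 0.1
J_tau = J_hat - gamma * np.cos(2 * np.pi * taus)

# Hypothetical triangular offset density peaked at tau = 0.
f_T = 2.0 * (1.0 - 2.0 * np.abs(taus))
f_T = f_T / (np.sum(f_T) * dtau)                       # normalize numerically

E_J = float(np.sum(J_tau * f_T) * dtau)                # expected value via LOTUS
Var_J = float(np.sum(J_tau**2 * f_T) * dtau - E_J**2)  # variance
```

For this triangular density the cosine integral evaluates to 4/π², so E[J] lands below Ĵ; swapping in any other f_T(τ) only changes the weights in the same quadrature.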

A. EDGE CASES FOR VALUE WEIGHTING AND TIMING
In general, a closed-form PDF f J (J ) cannot be derived for arbitrary closed-form definitions of p (t), v (t), and f T (τ ). However, this subsection explores four edge-case scenarios that yield closed-form statistics for J . Each scenario can be described with a narrative: (S1) Stakeholders value performance equally at all times. (S2) Stakeholders only value performance at a single point in time and are not concerned with the performance at any other time. (S3) Stakeholders assign time-varying value to infrastructure performance, but value is entirely uncorrelated with infrastructure disruption. (S4) Stakeholders assign time-varying value to infrastructure performance, but the alignment between infrastructure performance and value is known and fixed. Each narrative provides a specific definition of either v (t) or f T (τ ). Together, these cases encompass the extremes for two dimensions illustrated in Fig. 4: variation in timing and variation in value.
Scenario (S1) is interpreted as constant value weighting. Adhering to constraints (6), the value weighting function becomes v(t) = 1. This definition simplifies the expected value (10) to E[J] = Ĵ and the variance (11) to Var(J) = 0. This trivial result validates the formulation of J and provides a baseline: when stakeholders value performance equally at all times, the performance-based metric Ĵ also quantifies holistic system resilience. This is the scenario assumed in the majority of the infrastructure resilience curve literature.
Scenario (S2) describes the extreme alternative to (S1). Within the constraints (6), value weighting is an impulse, v(t) = δ(t − t_v), where t_v is the time at which stakeholders value performance. Value weighting is zero at all other times: v(t) = 0 for t ≠ t_v. In this case, percent value satisfied (9) becomes J(τ) = p(t_v + τ). In the general case, neither the expected value (10) nor the variance (11) has a convenient closed form; both depend on f_T(τ). Here, a comprehensive resilience assessment cannot be evaluated with p(t) alone: the time correlation, described by f_T(τ), is essential to resilience quantification.
Additionally, (S2) motivates a general theorem on the range of the summary metric J, regardless of p(t), v(t), and f_T(τ):

Theorem 1: \min_t p(t) \leq J \leq \max_t p(t) \quad (12)

Thus, Theorem 1 bounds J by (12) independent of v(t) or τ.
Scenario (S3) provides an edge case for the variation in timing between infrastructure performance and stakeholder value: the functions are completely uncorrelated. This is interpreted as a uniformly distributed time offset: f_T(τ) = 1. In this case, the expected value (10) simplifies to E[J] = Ĵ for any value function v(t). However, the variance (11) is non-zero (unless (S1) also applies). For analyses that seek expected system resilience, this is a useful result: time-varying stakeholder value can be ignored or left undescribed if v(t) and p(t) are known to be independent and uncorrelated in time. However, if an analysis seeks the full distribution of metric values (e.g., extremes or quartiles), then it must specify and incorporate v(t).
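The (S3) result can be verified numerically: averaging J over all circular offsets (a discrete uniform f_T) recovers Ĵ exactly, while the variance remains nonzero. A Python sketch with assumed periodic shapes:

```python
import numpy as np

n = 512
t = np.arange(n) / n
dt = 1.0 / n

p = 1.0 - 0.6 * np.sin(np.pi * t) ** 2   # assumed periodic performance
v = np.exp(np.sin(2 * np.pi * t))        # arbitrary positive value weighting
v = v / (np.sum(v) * dt)                 # normalization constraint on v

J_hat = float(np.sum(p) * dt)

# Uniform offset (S3): evaluate J at every circular shift, then average.
J_shifts = np.array([float(np.sum(p * np.roll(v, k)) * dt) for k in range(n)])
E_J = float(J_shifts.mean())
Var_J = float(J_shifts.var())
```

The mean matches Ĵ to floating-point precision, while the spread of J_shifts shows what a mean-only analysis would hide.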
The results for (S1) and (S3) provide a general theorem:

Theorem 2: For p(t) and v(t) ≥ 0 with periodicity described by period t ∈ [0, 1] and with the constraint \int_0^1 v(t)\, dt = 1, E[J] = Ĵ whenever v(t) = 1 for all t (as in (S1)) or f_T(τ) = 1 over τ ∈ [−1/2, 1/2] (as in (S3)).

Scenario (S4) provides the extreme alternative to (S3). For a given p(t) and v(t), the offset is fixed. This is interpreted as an impulse time offset, f_T(τ) = δ(τ − τ_s), where τ_s is the fixed offset. Complementing (S3), this scenario simplifies the variance (11) to Var(J) = 0, but the expected value (10) has no generalizable relationship with Ĵ (unless (S1) also applies). In this case, a performance-based resilience assessment is certainly insufficient, and the holistic assessment requires v(t) and τ_s.
Results for edge cases (S1)-(S4) are illustrated in Fig. 4. In reality, most practical systems will lie within these extremes. This provides the manuscript's first general insight: the accuracy of the naïve estimate Ĵ depends on variation in both value and the timing of that value. The following two subsections consider specific forms of p(t), v(t), and f_T(τ).

B. TRIGONOMETRIC PERFORMANCE AND VALUE WEIGHTING FUNCTIONS
Moving beyond generic forms, this subsection considers forms of v(t) and p(t) defined using trigonometric functions. These simplified forms provide closed-form solutions for f_J(J), which highlight considerations beyond those in the previous subsection. Analytical results are verified and illustrated through numerical simulation.
The trigonometric performance function (13) is a cosine function with a period of 1.0, a maximum of 1.0, a minimum of p_d ∈ [0, 1], and endpoint values p(0) = p(1) = 1:

p(t) = \frac{1 + p_d}{2} + \frac{1 - p_d}{2} \cos(2\pi t) \quad (13)

This represents a system that gradually undergoes degraded performance during a resilience scenario and gradually recovers to full performance. Fig. 5(a) illustrates a trigonometric performance function with p_d = 0.2.
From this form, the naïve summary metric can be written in terms of p_d:

\hat{J} = \frac{1 + p_d}{2} \quad (14)

The value weighting function (15) is similarly a cosine function with endpoint values v(0) = v(1) = v_0 and midpoint value v_1. This function could represent, for example, smooth diurnal variation in how stakeholders value infrastructure performance (if the period of both v and p is normalized to one day). The normalization constraint (6) requires v_0 = 2 − v_1 and limits the midpoint value to v_1 ∈ [0, 2]. Thus, the trigonometric value weighting function can be written in terms of v_1:

v(t) = 1 - (v_1 - 1)\cos(2\pi t) \quad (15)

Substituting these trigonometric forms into (9) and rearranging provides the summary metric percent value satisfied, J, as a function of the time offset, τ, the naïve estimate, Ĵ, and a scaling factor, γ:

J(\tau) = \hat{J} - \gamma \cos(2\pi \tau) \quad (16)

\gamma = \frac{(1 - p_d)(v_1 - 1)}{4} \quad (17)

The performance function in Fig. 5(a) combines with v_A(t) and v_B(t) in Fig. 5(b) to produce distinct J_A(τ) and J_B(τ) functions. The difference in v_1 values affects the scaling factor γ and the subsequent range of J(τ). While the ranges of both J_A(τ) and J_B(τ) are centered around their shared naïve estimate Ĵ = 0.60, J_B(τ) has a higher maximum and a lower minimum value. This is a direct consequence of the larger variation in value weighting function v_B(t). Together, (16)-(17) and Fig. 5(d) motivate a general intuition for any form of p(t) or v(t): the more a value weighting function's magnitude varies, the more consequential its timing is for the resilience assessment.
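The closed form in (16)-(17) can be checked against direct numerical integration of (9). A Python sketch follows (grid-aligned offsets; the parameters p_d = 0.2 and v_1 = 1.8 are illustrative choices satisfying the constraints):

```python
import numpy as np

p_d, v_1 = 0.2, 1.8
n = 4096
t = np.arange(n) / n
dt = 1.0 / n

p = (1 + p_d) / 2 + (1 - p_d) / 2 * np.cos(2 * np.pi * t)  # trigonometric p(t)
v = 1 - (v_1 - 1) * np.cos(2 * np.pi * t)                   # trigonometric v(t)

J_hat = (1 + p_d) / 2                                       # naive metric
gamma = (1 - p_d) * (v_1 - 1) / 4                           # scaling factor

def J_numeric(k):
    """Direct integration of percent value satisfied at grid offset tau = k / n."""
    return float(np.sum(p * np.roll(v, k)) * dt)

def J_closed(k):
    """Closed form: J(tau) = J_hat - gamma * cos(2*pi*tau)."""
    return J_hat - gamma * np.cos(2 * np.pi * k / n)
```

On this sampling grid the two agree to floating-point precision, because the sampled cosines retain their orthogonality under the discrete sum.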
Insight from the form of J (τ ) in (16) can be expanded by considering input and output PDFs. The time offset is treated as a bounded input random variable, τ ∈ T, and percent value satisfied is treated as an output random variable, J ∈ J . With trigonometrical forms of p (t) and v (t), the expected value (10) and variance (11) for f T (τ ) become: Var While (18) and (19) are not particularly insightful, f J (J ) provides a more convenient form when f T (τ ) is symmetrical over τ ∈ [−1/2, 1/2]. From (16), J (τ ) is symmetrical over τ ∈ [−1/2, 1/2]. By considering the representative domain τ ∈ [0, 1/2] and applying the Method of Transformations, the output PDF f J (J ) can be derived. For trigonometric forms of p (t) and v (t) with a symmetrical f T (τ ) the PDF for percent value satisfied becomes: where The next two subsections consider specific f T (τ ) forms: uniform and selected beta distributions. Both can be fully defined with symmetry over τ ∈ [−1/2, 1/2], providing the means to expand the closed form described by (20).

1) UNIFORM DISTRIBUTION TIME OFFSET
The PDF f_J(J) in (20) can be simplified when the time offset is uniformly distributed: T ∼ U(−1/2, 1/2), for which f_T(τ) = 1. As discussed in edge-case scenario (S3), this represents completely uncorrelated timing of p(t) and v(t). The expected value (18) becomes E[J] = Ĵ, which is consistent with the results for (S3). The variance (19) becomes:

Var(J) = \frac{\gamma^2}{2} = \frac{(1 - p_d)^2 (v_1 - 1)^2}{32} \quad (23)

The PDF for percent value satisfied (20) becomes:

f_J(J) = \frac{1}{\pi \sqrt{\gamma^2 - (J - \hat{J})^2}} \quad (24)

Equations (23) and (24) provide some intuition on the relationships between f_J(J) and the elements of uncorrelated, trigonometric forms of p(t) and v(t). The variance increases quadratically as either the disrupted performance level, p_d, decreases or the value level v_1 deviates from unity. Alternatively, the variance approaches zero as the disrupted performance level increases and value weighting approaches constant.
The closed-form PDF (24) is confirmed through analytical and numerical analysis. Because f_J(J) is symmetrical around E[J], its mean value is also its median. However, E[J] does not represent the modal value. Instead, f_J(J) exhibits vertically asymptotic increases at the extremes of its range. This may contradict stakeholders' intuition, where it is common to assume a normal distribution. In that case, an assumed normal distribution with E[J] = Ĵ would underestimate the probability density at the actual extremes of J. For stakeholders interested in worst-case scenarios, quartiles of J may be more appropriate than an (incorrectly) assumed normal distribution. As illustrated in Fig. 5(e), assessments of System B1 are more likely to be outside the range of System A than within it; this is entirely due to variation between v_A(t) and v_B(t), even though disruption and value are uncorrelated. Together, these observations expand upon the previous subsection with a general intuition: even when infrastructure disruption and stakeholder value are uncorrelated, the greater the infrastructure disruption and the greater the variation in value, the less Ĵ should be assumed representative.
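Monte Carlo sampling makes this shape concrete: with an uncorrelated (uniform) offset, realizations of J pile up near the extremes Ĵ ± γ rather than near the mean. A Python sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
p_d, v_1 = 0.2, 1.8                 # illustrative parameters
J_hat = (1 + p_d) / 2
gamma = (1 - p_d) * (v_1 - 1) / 4

# Uncorrelated timing: sample uniform offsets, evaluate the closed-form J(tau).
tau = rng.uniform(-0.5, 0.5, size=200_000)
J = J_hat - gamma * np.cos(2 * np.pi * tau)

# Compare probability mass near the extremes versus near the mean.
frac_near_edges = float(np.mean(np.abs(J - J_hat) > 0.9 * gamma))
frac_near_mean = float(np.mean(np.abs(J - J_hat) < 0.1 * gamma))
```

The edge mass dominates the center mass, mirroring the vertical asymptotes of the closed-form PDF; a normal approximation centered at Ĵ would miss exactly these worst and best cases.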

2) BETA DISTRIBUTION TIME OFFSET
The PDF f_J(J) in (20) can also be simplified with a constrained and shifted beta distribution for f_T(τ). This describes a form of time correlation (positive or negative) between trigonometric p(t) and v(t). In its general form, the beta distribution X ∼ Beta(α, β) is defined over x ∈ [0, 1] with PDF (25):

f_X(x) = x^(α−1) (1 − x)^(β−1) / B,

where B is the normalization constant ensuring ∫₀¹ f_X(x) dx = 1.

TABLE 2. Descriptive statistics for f_J(J) in Fig. 5 and Fig. 6.
The beta distribution is symmetrical when α = β. Constraining to symmetrical forms, shifting the domain from x ∈ [0, 1] to τ ∈ [−1/2, 1/2], and rearranging (25) provides the time offset PDF. Returning to (20) then provides the PDF for percent value satisfied, (28), for trigonometric p(t) and v(t) with a symmetrical beta-distributed time offset. This distribution provides three regimes for the time offset. As illustrated by System B2 in Fig. 6(c), when 0 < α < 1, the mode of the time offset is τ = ±1/2. As illustrated by System B3 in Fig. 6(c), when α > 1, the mode of the time offset is τ = 0. The third regime, α = 1, becomes the uniform distribution previously described by System B1 in Fig. 5(c).
The formulation in (28) is useful when p(t) and v(t) are defined such that their extreme values are aligned, as in Fig. 6(a)-(b). Within Fig. 6, System B2 illustrates a case in which the infrastructure disruption is generally shifted away from peaks in stakeholder value. For real-world systems, this could represent preventative maintenance (e.g., generator load testing) or the ability to shift stakeholder value relative to anticipated hazards (e.g., hurricane evacuation). In contrast, System B3 represents the case in which disruption and value are generally correlated. For real-world systems, this could represent targeted attacks that specifically seek high-value periods (e.g., cyber exploitation of military systems). Both can be contrasted with System B1 in Fig. 5, where the uniform distribution indicates no correlation at all. For real-world systems, this could represent random hazards like component failures.
Unlike System B1 in Fig. 5, Systems B2 and B3 do not provide symmetrical PDFs. For System B2, f_J(J) skews to higher values; the naïve estimate Ĵ underestimates the expected assessment. For System B3, f_J(J) skews to lower values; the naïve estimate Ĵ overestimates the expected assessment. For both, E[J] and Var(J) provide even less context than they did for the uniform distribution in Fig. 5; at a minimum, quartiles are needed to convey relevant descriptive statistics. Table 2 provides the descriptive statistics for Systems A, B1, B2, and B3. This example (albeit contrived to allow for a closed-form solution) provides the general intuition: when stakeholder value is correlated with the performance disruption, performance-based assessments are likely to overestimate system resilience. Even without defined system-specific parameters, this insight can inform the scope of resilience analyses and resilience options. Additionally, if v(t) were characterized by a decrease in stakeholder value (i.e., v_1 < 1), then the opposite would be true: time correlation between p(t) and v(t) would improve the resulting assessments.
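These skews can be checked numerically under an assumed reduced form: for trigonometric p(t) and v(t) whose extremes align at τ = 0, percent value satisfied collapses to J(τ) = Ĵ − k·cos(2πτ). This reduction, and the values of Ĵ and k below, are hypothetical stand-ins rather than the paper's parameters:

```python
import math
import random

# Assumed illustrative reduction: for trigonometric p(t) and v(t) whose
# extremes align at tau = 0, percent value satisfied takes the form
# J(tau) = J_hat - k*cos(2*pi*tau). J_hat and k are hypothetical values.
J_hat, k = 0.8, 0.05

def J(tau):
    return J_hat - k * math.cos(2 * math.pi * tau)

def mean_J(alpha, n=20000, seed=1):
    # time offset: symmetric Beta(alpha, alpha) shifted onto [-1/2, 1/2]
    rng = random.Random(seed)
    return sum(J(rng.betavariate(alpha, alpha) - 0.5) for _ in range(n)) / n

# 0 < alpha < 1: offsets pile up near +/-1/2 (the disruption avoids the
# value peak), so f_J(J) skews high and J_hat underestimates E[J].
# alpha > 1: offsets concentrate near 0 (the disruption hits the value
# peak), so f_J(J) skews low and J_hat overestimates E[J].
print(mean_J(0.5) > J_hat, mean_J(5.0) < J_hat)  # True True
```

The α = 1 regime (uniform offset) recovers E[J] = Ĵ, matching System B1; the two beta regimes reproduce the qualitative under- and overestimation described for Systems B2 and B3.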
These trigonometric forms of p (t) and v (t) provided insights and motivation to further investigate value weighting functions. The next subsection extends this approach to a more common form of p (t) since resilience curves do not typically match the trigonometric shapes.

C. TRIANGULAR PERFORMANCE AND VALUE
This subsection considers specific forms of p(t) and v(t) that are ''triangular'' in shape. Resilience curves are often presented as a ''resilience triangle'' [94]. In this form, infrastructure performance suffers an instantaneous drop, immediately followed by linear or approximately linear recovery to the initial performance level [15], [22], [41], [89], [95]-[97]. Such a curve can be defined by the time of disruption, t_d, the residual performance, p(t_d) = p_d, and restored performance after the disruption duration, Δt. This defines the infrastructure performance function and, from it, the naïve, performance-based summary metric Ĵ. A convenient value function is one with the same shape: steady-state value v_ss, an instantaneous change at t_d to v(t_d) = v_d, and a linear return to v_ss over Δt. The constraints provided by (6) can be described in terms of Δt, v_ss, and v_d. Within this constraint, v_d can be greater than or less than the steady-state value v_ss (v_d = v_ss becomes constant value weighting). Over Δt, p(t) and v(t) linearly recover to their steady-state values.
When the control interval is sufficiently large such that Δt < 1/2, these triangle functions provide a somewhat convenient form for percent value satisfied, J(τ). Illustrated in Fig. 7(d), this summary metric is symmetrical with two distinct regions. When the absolute time offset exceeds Δt, percent value satisfied is constant: J_ss. Within τ ∈ [−Δt, Δt], percent value satisfied is monotonic between J(τ = 0) = J_d and J_ss. These two values provide the summary metric's range and can be written in terms of the naïve, performance-based metric of cumulative performance disruption. Within these forms, v_d > v_ss results in J_d < J_ss, while v_d < v_ss results in J_d > J_ss. The former applies in Fig. 7, providing J_ss = 0.991 and J_d = 0.451. Although Δt is only 20% of the control interval, J_d is a significant departure from J_ss, and it is likely of interest to stakeholders. However, constraints (6) prevent such a stark difference when v_d < v_ss provides J_d > J_ss. Together, this provides some general intuition:

for systems in which value weighting has both steady-state and transient components, assessments must consider when the performance disruption both does and does not overlap with the value transient.
Unlike the trigonometric functions, the monotonic region of J(τ) in (33) does not provide a convenient or illustrative probability density function, f_J(J). Instead, observations and general intuition are found through numerical analysis. Fig. 7(c) illustrates two time-offset distributions, f_T(τ). For System C, f_T(τ) is uniformly distributed; this represents an uncorrelated performance disruption and value surge. For System D, f_T(τ) has a truncated normal distribution with µ = 0 and σ = 0.05; this represents an expectation that the performance disruption and value surge are aligned or nearly aligned. In a real-world system, this alignment could represent targeted attacks timed to disrupt systems during critical operations. Table 3 provides the descriptive statistics for percent value satisfied, f_J(J). Other than their common range, the systems' results should be interpreted very differently. As anticipated with a uniformly distributed time offset, the expected value for System C matches the naïve, performance-based estimate, Ĵ. More notably, the maximum J_ss dominates the results with Pr[J = J_ss] > 0.50. This result is encouraging for stakeholders concerned with uncorrelated failures over a system's lifecycle: the performance-based assessment matches the expected metric and only slightly underestimates the median and mode. In such cases, Ĵ is a reasonable estimation. However, System D highlights how this estimate fails under likely correlation between p(t) and v(t): Ĵ greatly overestimates the expected, median, and modal assessment; it is no longer a reasonable estimation. In such a case, resilience analyses must consider potential variation in value and its correlation with the infrastructure disruption. Together, these numerical results provide another general insight: for systems in which value weighting has both steady-state and transient components, temporal correlation between performance and value can dominate the assessment.
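The two-region behavior of the triangular case can be sketched numerically. The triangle parameters and the definition of J(τ) below are illustrative assumptions (they do not reproduce Fig. 7's J_ss = 0.991 and J_d = 0.451):

```python
# Numerical sketch of the triangular case. The parameter values and the
# assumed definition J(tau) = int p(t) v(t - tau) dt / int v(t) dt over a
# unit control interval are illustrative, not the paper's figures.
t_d, dt = 0.4, 0.2              # disruption time and recovery duration
p_d, v_ss, v_d = 0.5, 1.0, 3.0  # residual performance; steady and surge value

def tri(t, base, level):
    # instantaneous step from `base` to `level` at t_d, linear return over dt
    if t_d <= t < t_d + dt:
        return level + (base - level) * (t - t_d) / dt
    return base

def J(tau, n=20000):
    # midpoint-rule integration of p(t) * v(t - tau), normalized by v
    ts = [(i + 0.5) / n for i in range(n)]
    num = sum(tri(t, 1.0, p_d) * tri(t - tau, v_ss, v_d) for t in ts) / n
    den = sum(tri(t - tau, v_ss, v_d) for t in ts) / n
    return num / den

# |tau| > dt: the value surge misses the disruption and J is constant (J_ss).
# tau = 0: the surge lands on the disruption and J drops to its minimum (J_d).
J_ss, J_d = J(0.3), J(0.0)
print(J_d < J_ss, abs(J(-0.3) - J_ss) < 1e-6)  # True True
```

With these hypothetical parameters, J_ss is roughly 0.96 while J_d falls near 0.90: even a value surge occupying only 20% of the interval produces a distinct dip whenever it overlaps the disruption, and the symmetric plateau outside τ ∈ [−Δt, Δt] matches the constant region described above.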

IV. DISCUSSION
The previous section explored relationships between p(t), v(t, τ), f_T(τ), Ĵ, and f_J(J), for both general and specific forms. This section distills these relationships into three categories of practical consequences for infrastructure resilience assessments: tradeoffs and improvement opportunities; hazard categories for analysis; and key takeaways from earlier results.

A. TRADEOFFS AND IMPROVEMENT OPPORTUNITIES
Two new opportunities to improve (or worsen) a system's resilience become apparent when value weighting functions are added to a resilience assessment: modification of how much the performance is valued, and modification of the temporal correlation between increases in value and decreases in performance. When stakeholders are deciding between infrastructure configurations and operating procedures, they should view performance and value as neither independent nor immutable.
Improvements to one dimension may affect the others, especially in the long term. In the 1930s, electricity was still a novelty and facilities could function without it; by the 1960s, facilities were built with the expectation of electricity, and outages had greater consequence [98]. Such effects may also be more immediate: if a newly installed backup system is tested less frequently than its predecessor, it may be more likely to fail during real-world events. These examples highlight a general caution: decisions that improve Ĵ may, over time, motivate behavioral changes toward less favorable v(t) and f_T(τ) functions.
To address this, resilience analyses should consider correlated changes across p (t), v (t), and f T (τ ). Fig. 8 plots and Table 4 summarizes an illustrative example in which each varies across three cases: teal, orange, and violet in the figure. TEAL: a moderate infrastructure performance loss with a moderate expectation that peak value is anti-correlated with the disruption (e.g., highways during an overnight snowstorm). ORANGE: a more subdued infrastructure performance loss with little variation in value and a high degree of uncertainty of the value-performance correlation (e.g., drinking water systems during extreme floods). VIOLET: a short but extreme performance loss with a high degree of certainty that a peak in value will occur during a performance trough (e.g., a hospital during a hurricane).
When compared using the naïve performance-based assessment Ĵ, the violet case appears most desirable; however, this configuration yields the lowest expected valued performance E[J]. In contrast, the teal case provides the best E[J], despite the worst Ĵ. However, the teal combination also provides the worst extreme value, min(J). The orange case does best by that metric, while also providing the smallest range of possible values.
There are many reasons to consider combinations of p(t), v(t), and f_T(τ). Each combination could reflect a category of scenarios. Each combination could reflect investment or planning decisions (e.g., physical configurations, operational protocols). Regardless of the specific decision space, the combination of p(t), v(t), and f_T(τ) is shown to be relevant to the assessment's recommendation. Note that this contrived example assumes the normalization of value weighting functions is consistent across combinations; practical applications must carefully consider changes in absolute levels of value.
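A ranking flip of this kind can be caricatured in a few lines. The reduced form J(τ) = Ĵ − k·cos(2πτ), the beta-distributed offsets, and every parameter value below are hypothetical stand-ins for the three cases, chosen only to show how the preferred configuration depends on the metric; none of these numbers come from the paper:

```python
import math
import random

# Hypothetical stand-ins for three competing configurations: each case is a
# (J_hat, k, alpha) triple under the assumed reduction
# J(tau) = J_hat - k*cos(2*pi*tau), with tau ~ Beta(alpha, alpha) - 1/2.
cases = {
    "teal":   (0.82, 0.10, 0.2),   # worst naive J_hat; value avoids disruption
    "orange": (0.84, 0.01, 1.0),   # steady value, uncorrelated offset
    "violet": (0.92, 0.12, 10.0),  # best naive J_hat; value hits disruption
}

def summarize(J_hat, k, alpha, n=20000, seed=2):
    rng = random.Random(seed)
    js = [J_hat - k * math.cos(2 * math.pi * (rng.betavariate(alpha, alpha) - 0.5))
          for _ in range(n)]
    return sum(js) / n, min(js), max(js)  # E[J], min(J), max(J)

stats = {name: summarize(*params) for name, params in cases.items()}

# Ranked by the naive metric, violet looks best; ranked by expected valued
# performance E[J], teal leads and violet trails; orange has the best worst
# case and the narrowest range.
best_naive = max(cases, key=lambda c: cases[c][0])
best_expected = max(stats, key=lambda c: stats[c][0])
best_worst_case = max(stats, key=lambda c: stats[c][1])
print(best_naive, best_expected, best_worst_case)  # violet teal orange
```

Each metric crowns a different configuration, which is the point: a recommendation built on Ĵ alone can invert once v(t) and f_T(τ) enter the assessment.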
Comparison of configurations is best achieved through their respective f_J(J) PDFs. Some existing infrastructure resilience literature presents results in this fashion, illustrating the distribution of performance-based summary metrics from Monte Carlo analyses [20], [23], [99]-[101]. However, existing work considers only variation in p, not v or f_T(τ).

FIGURE 8. Illustration of tradeoffs between p(t), v(t), and f_T(τ). Case 1 (teal) has a moderate infrastructure performance disruption but well controlled value; its expected resilience is high, but its minimum is low. Case 2 (orange) has moderate infrastructure performance with steady, uncorrelated value; its expected resilience is the naïve estimate with a small range. Case 3 (violet) has the best naïve estimate, but high value weighting is correlated with the disruption; its expected resilience is the lowest.

If two of these three curves can be fixed in a resilience analysis and/or design, the influence of the third can be effectively studied. Performance p is the focus of extensive literature, but the other two are generally underexplored. This provides new opportunities. Improving resilience by controlling v(t) may include both minimizing surges in value and providing periods of low value. Such options would be influenced by corresponding changes in f_T(τ): periods of high value are acceptable if they can be controlled to avoid infrastructure performance disruptions (i.e., through anticipation or delay). This manuscript illustrates that p(t) alone is often insufficient for resilience assessments; at the same time, resilience can be improved without changing p(t).

B. HAZARD CATEGORIES
Hazard categories are common in the infrastructure resilience and risk literature, often derived from the type of impact on the system. In network science, node disruption is often characterized as either random or targeted (generally based on topological properties) [1], [102]-[104]. Others incorporate an additional category for spatial considerations [105]-[107]. Alternatively, hazard categories may be derived from the initiating source. The Department of Homeland Security distinguishes accidental for ''negligence, error, or unintended failure'', intentional for ''deliberate action'', and natural for ''meteorological, environmental, or geological phenomenon'' [108]. This manuscript identifies a new criterion with which to categorize hazards: the probabilistic alignment between an infrastructure disruption and stakeholder value.
As highlighted with System B across Fig. 5 and Fig. 6, increased stakeholder value may be: uncorrelated with the disruption, correlated away from the disruption, or correlated with the disruption. These options can be mapped to general hazard categories: random, foreseeable, and adversarial. For random hazards, there is no common mechanism between the disruption and stakeholder value (e.g., lifecycle component failure). Foreseeable hazards can be anticipated, allowing stakeholders to shift value to before or after the disruption (e.g., evacuation and hunkering down during a weather event). Alternatively, adversarial hazards are those in which the timing targets stakeholder value as much as possible (e.g., terrorism). In general, each category provides its own skew on the distribution f_J(J). This is informed by variation within v(t); as described in edge-case scenario (S1), constant value weighting eliminates any distinction. Table 5 summarizes each of the categories with practical impacts in mind.
These categories are informative, not definitive, but they can guide the scope of resilience analyses. A system exposed to all three categories should not focus all analyses on a single category of hazards. Analyses seeking to understand comprehensive resilience might best consider one hazard from each category before multiple from the same category. As a specific caution, stakeholder intuition developed for random and foreseeable hazards should not be extended to adversarial hazards without careful consideration. This is especially the case for systems with high variation in value.
Finally, multi-hazard analyses should consider combinations of these categories. Existing work in this area tends to consider compound natural hazards, like combinations of flooding, earthquakes, and hurricanes [109]. Combining hazards across these categories provides additional variation which may be underexplored. For example, a compound of a foreseeable hazard and an adversarial attack could be framed as an ''opportunity attack'' on critical infrastructure. Establishing the hazard conditions for resilience analyses is non-trivial, and these categories can inform the scope of that process.

C. GENERAL INSIGHTS
Analytical and numerical results provided five general insights. Each of these key takeaways can be used to calibrate a mental model for approaching resilience assessments with specific forms of p (t), v (t), and f T (τ ).
One: the accuracy of the naïve estimate Ĵ depends on variation in both value and the timing of that value.
This insight was the starting point that motivated exploration of both v(t) and f_T(τ). The performance-based estimate Ĵ is only comprehensive when stakeholder value is constant. This is often not the case, and the assumption should be justified if made within an analysis. The most extreme alternative to constant value is an impulse function, for which performance is only valued at a single point in time. On a similar spectrum is the time offset distribution: ranging from no correlation (a uniform distribution) to a fixed offset (an impulse distribution).
No correlation provides Ĵ = E[J], which may be useful for some analyses; however, the variance is non-zero. Alternatively, a fixed offset provides zero variance, and E[J] can be established directly from the known p(t) and v(t) functions. Real systems exist somewhere between both sets of extremes; thus, resilience assessments should directly address the possibility of non-constant value and of correlation between the disruption and value.
Two: even when infrastructure disruption and stakeholder value are uncorrelated, the greater the infrastructure disruption and the greater the variation in value, the less Ĵ should be assumed as representative.
When the time offset is uniformly distributed, E[J] = Ĵ. This special case is appealing, as many analyses seek the expected value. However, stakeholders may be interested in the range of possible values. For the trigonometric forms in Fig. 5, the modes of the distribution lie at the extreme high and low values of J, far from the median and mean. While this observation does not extend to all forms of p(t) and v(t), it serves as a warning not to neglect the extremes within an analysis. The range between those extremes will increase with the size of the infrastructure disruption and the variation in value. Consequently, for a small disruption, it may be reasonable for an analysis to extend Ĵ as representative of f_J(J), but that presumption may no longer hold for larger disruptions. Considering variation in value, an analysis may be initially scoped by informally asking stakeholders: ''During possible scenarios, are there significant changes in the relative value the infrastructure system provides?'' If the answer is ''yes'', then stakeholders may be interested in the range of possible values, even though the expected value is known.
Three: when stakeholder value is correlated with the performance disruption, performance-based assessments are likely to overestimate system resilience.
Expanding beyond uniformly distributed time offsets, the correlation between performance disruption and value was shown to skew the resulting distribution f_J(J). For systems in which value is correlated with non-disrupted performance, the naïve estimate Ĵ underestimates comprehensive system resilience, J. Despite the inaccuracy, this is generally a favorable result: stakeholders will be less disrupted than expected. However, when value is correlated with the disrupted performance, Ĵ will overestimate comprehensive system resilience. This error should be specifically avoided: overestimating resilience can have a compounding effect in which unsupported overconfidence leads to even greater impact. Resilience analyses should directly confront this by asking stakeholders: ''Do you anticipate valuing infrastructure more during the disruption?'' If the answer is ''yes'', then the analysis should incorporate a representation of v(t) and f_T(τ).
Four: for systems in which value weighting has both steady-state and transient components, assessments must consider when the performance disruption both does and does not overlap with the value transient.
Generally, an analysis's control interval should be ''suitably long'' [89]. When looking at a range of hazards or system behaviors, this may mean that the infrastructure disruption is a short portion of the overall period. Similarly, the value weighting function might be characterized by a steady-state level with a short duration of change, either increased or decreased relative value. Within an analysis, it may be appealing to treat the steady-state value weighting as a constant value. But this would be misleading, for the same reason that performance-based resilience assessments are not determined solely by steady-state infrastructure performance. Even if the time offset PDF is unknown, the range of J values can be bounded by considering both the steady-state case (when the disruption occurs during steady-state value) and the extreme case (when the disruption occurs during the period of transient value).
Five: for systems in which value weighting has both steady-state and transient components, temporal correlation between performance and value can dominate the assessment.
Expanding upon the previous insight, the infrequency of transient value cannot alone be used to exclude the consideration of value-weighted metrics. There are many potential mechanisms which may drive performance disruptions and value to be aligned, and that alignment may overwhelm the distribution f_J(J). This final insight should be considered the starting point for scoping resilience assessments. To understand the applicability of Ĵ and the subsequent scope of the analysis effort, stakeholders should be asked to describe the general variation in how they value infrastructure performance and how that variation is expected to align with a disruption. Without documentation of such considerations, resilience assessments cannot confidently rely on the performance-based, naïve estimate Ĵ.

V. CONCLUSION
This manuscript provides both a warning and a call to action. As a warning: time-varying value can significantly affect the results of quantitative resilience assessments, potentially leading to significant overestimates of resilience. As a call to action: the concept of time-varying value provides new opportunities to describe and investigate how infrastructure systems support stakeholders during resilience scenarios. Future work in this area includes six focus areas.

A. VALUE WEIGHTING AS AN INTERFACE DEFINITION
As illustrated in Fig. 2, establishing system boundaries is a challenge for comprehensive resilience assessments. Every system has connected systems that influence its behavior or the context of its performance. ''An essential goal of systems engineering is to achieve a high degree of modularity to make interfaces and interactions as simple as possible'' [110].
Existing resilience assessments of interconnected infrastructure do not generally acknowledge the possibility of time-varying value, and instead model and simultaneously simulate all subsystems (e.g., network representations of coupled power, water, and communication systems [26], [83]). This presents practical organizational challenges, since a stakeholder rarely has detailed models of interconnected systems. For example, an electric train operator has neither the models nor the expertise to fully incorporate the electric grid into their resilience analysis.
However, value weighting functions provide an opportunity for describing succinct interfaces between interconnected systems for resilience analysis. A stakeholder could simply model their own system (where they know the value weighting(s) v (t)) as long as the interconnected system could provide performance curve(s) p (t), and both could agree on correlation curve(s) f T (τ ). Even approximate value functions and correlations may provide improved understanding of resilience (over constant value), while preventing the need for a larger scope of the analysis.

B. ESTABLISHING MODEL BOUNDARIES
In a complement to the previous area, time-varying value can inform when model boundaries should be expanded to include additional elements. Increasing the scope of an analysis comes at a cost in time and effort-this effort may be justified if it provides more accurate analysis and better recommendations. But the impact of an analysis is generally unknown until explored through the analysis itself. Broad consideration of value weighting and correlation with infrastructure disruption can inform if and when boundaries warrant consideration. If stakeholders expect constant value weighting, the model boundaries can be driven by the infrastructure system. If stakeholders expect high variation in value weighting and that value may correlate with the disruption, then the scope may need to expand to include more mechanisms that influence context (and thus value). With stakeholder engagement, these concepts can inform when additional modeling effort is recommended.

C. VALUE PARAMETERIZATION AND TAXONOMY
Research is needed to understand the factors that influence value weighting functions and, from that, how to generate reasonable functions. This could be done through data analyses, sensors, and stakeholder surveys. Although specifics vary within each scenario, infrastructure resilience curves are often presented with canonical shapes and forms. Value weighting functions may or may not have their own set of standard characteristics. Developing a taxonomy of elements (with their typical applicability) would spur the implementation of time-varying value within infrastructure resilience analyses.

D. INCORPORATION WITHIN ENSEMBLES
In this manuscript, resilience curves were presented and considered individually. However, resilience analyses often generate an ensemble of infrastructure resilience curves (e.g., through Monte Carlo analysis). Each of these curves represents a potential trajectory, and the analysis spans the range of possibilities. Value weighting functions introduce another dimension in which to generate an ensemble. Even with value parameterization and a taxonomy, this raises new research questions. How should an ensemble of p(t) and an ensemble of v(t) be combined while avoiding the curse of dimensionality? Should the ensembles be considered independent with a common f_T(τ), or does each pair have its own f_T(τ)? Understanding these questions is a necessary next step.

E. PERFORMANCE AND VALUE DYNAMICS
This manuscript explored the time correlation of performance and value weighting functions. However, both functions were treated as independent in their actual trajectories. This is consistent with the initial motivation: understanding the dynamics between infrastructure states and their context can be prohibitively challenging. Yet this manuscript highlights the importance of incorporating time-varying value in resilience assessments, and so it encourages further exploration in this area. For many infrastructure systems and resilience scenarios, the progression of performance and value may be related. In some cases, unmet performance is memoryless and does not influence value (e.g., lighting). In other cases, unmet performance accumulates as future demand and increases the value of performance (e.g., wastewater lift stations). Understanding these dynamics, even as general categories, can further the field of infrastructure resilience. Specifically, systems for which performance degradation and value are correlated have low resilience, but systems in which they feed back to one another may have catastrophic consequences. Solutions in this area may include damping between the infrastructure performance and value weighting functions, akin to demand response for system value.

F. PRACTICAL RESILIENCE RECOMMENDATIONS
With further exploration, the concepts described in this manuscript may lead to practical resilience recommendations, even without additional quantitative analysis. Resilience assessments are based on infrastructure performance, stakeholder value, and time correlation between the two. Expanding beyond infrastructure performance provides multiple questions for consideration. How can stakeholders adjust their value weighting functions? How can they adjust time correlations (e.g., anticipating or delaying value)? What factors allow deliberate control of value functions? This focus area may incorporate related infrastructure concepts. ''Passive survivability''-the ability for a facility to operate with reduced external input [111]-could be interpreted as a method to control value weighting functions.
Finally, this focus area (and the manuscript broadly) highlights an important but overlooked consideration: infrastructure resilience can be improved without changing the trajectory of infrastructure performance. The ability to control stakeholder value and its alignment with disruptions provides additional methods to improve infrastructure system resilience; these options warrant deliberate consideration within analyses and as options for practical recommendations.