A Fundamental Storage-Communication Tradeoff for Distributed Computing With Straggling Nodes

Placement delivery arrays for distributed computing (Comp-PDAs) have recently been proposed as a framework to construct universal computing schemes for MapReduce-like systems. In this work, we extend this concept to systems with straggling nodes, i.e., to systems where a subset of the nodes cannot accomplish the assigned map computations in due time. Unlike most previous works that focused on computing linear functions, our results are universal and apply to arbitrary map and reduce functions. Our contributions are as follows. First, we show how to construct a universal coded computing scheme for MapReduce-like systems with straggling nodes from any given Comp-PDA. We also characterize the storage and communication loads of the resulting scheme in terms of the Comp-PDA parameters. Then, we prove an information-theoretic converse bound on the storage-communication (SC) tradeoff achieved by universal computing schemes with straggling nodes. We show that the information-theoretic bound matches the performance achieved by the coded computing schemes with straggling nodes corresponding to the Maddah-Ali and Niesen (MAN) PDAs, i.e., to the Comp-PDAs describing Maddah-Ali and Niesen's coded caching scheme. Interestingly, the MAN-PDAs are optimal for any number of straggling nodes. This implies that the map phase of optimal coded computing schemes does not need to be adapted to the number of stragglers in the system. We show that the points that lie exactly on the fundamental SC tradeoff cannot be achieved with Comp-PDAs that require a smaller number of files than the MAN-PDAs. This is however possible for some of the points that lie close to the SC tradeoff. For these latter points, the decrease in the required number of files can be exponential in the number of nodes of the system.
We also model the total execution time, and numerically show that the active set size should be chosen to balance the duration of the map phase and the durations of the shuffle and reduce phases.


I. INTRODUCTION
Distributed computing has emerged as one of the most important paradigms to speed up large-scale data analysis tasks. One of the most popular programming models is MapReduce [2], which has been used to parallelize computations across distributed computing nodes, e.g., for machine learning tools [3], [4]. Consider the task of computing D output functions from N files through K nodes in a MapReduce system. Each output function φ d (1 ≤ d ≤ D) can be decomposed into • N map functions f d,1 , . . ., f d,N , each depending on exactly one distinct file; and • a reduce function h d that combines the outputs of the N map functions.
Each node k is responsible for computing a subset of D/K output functions through three phases. In the first map phase, a central server stores a subset of files M k at node k, for each k ∈ [K]. Each node k then computes all the D intermediate values (IVAs) f 1,n (w n ), . . ., f D,n (w n ) from each of its stored files w n ∈ M k . In the subsequent shuffle phase, it creates a signal from its computed IVAs and sends the signal to all the other nodes. Based on the received signals and the locally computed IVAs, in the final reduce phase it reconstructs all the IVAs pertaining to its own output functions and calculates the desired outputs.
Recently, Li et al. proposed a scheme named coded distributed computing (CDC) to reduce the communication load in the shuffle phase [5]. The idea is to create multicast opportunities by duplicating the files and computing the corresponding map functions at different nodes. It is shown that the CDC scheme achieves the fundamental storage-communication tradeoff, i.e., it has the lowest communication load for a given storage constraint. This result has been extended in various directions. For example, [6]–[8] also account for the computation load during the map phase; [9] studies the computation resource-allocation problem; [10]–[13] consider wireless (noisy) networks between computation nodes; and [14] considers a model where during the shuffle phase each node broadcasts only to a random subset of the nodes.
In this paper, we consider a setup where during the map phase each node takes a random amount of time to compute its assigned map functions [15]. In this case, instead of waiting for all the nodes to finish the assigned computations, which can cause an intolerable delay, data shuffling starts as soon as some set of Q nodes, Q ∈ [K], terminates its map procedures. The Q nodes that first terminate the map procedure are called active nodes, while the remaining K − Q nodes are called straggling nodes or stragglers. The stragglers are not identified prior to the beginning of the map phase, and hence the map phase has to be designed without such knowledge.
Distributed computing systems with straggling nodes have mainly been studied in the context of a server-worker framework. In this framework, a central server distributes the raw data to the workers as in the map phase described above, but following this map phase the workers directly communicate their intermediate results to the server, which then produces the final outputs. (Thus, under the server-worker framework, all final outputs are calculated at the server and not at the distributed computing nodes, as is the case in MapReduce systems.) The described server-worker framework with stragglers was treated, for example, in [15]–[25] with a focus on high-dimensional matrix-by-matrix or matrix-by-vector multiplications, and in [26]–[30] with a focus on gradient computing.
Fewer works have studied MapReduce systems with straggling nodes (hereafter referred to as straggling systems), which are more relevant for the present article. Specifically, Li et al. [31] proposed to incorporate MDS codes into the CDC scheme to cope with straggler nodes. Their construction, however, works only when the map functions accomplish matrix-by-vector multiplications. Improved constructions were proposed by Zhang et al., who choose the parameters of the MDS code and the CDC scheme separately in a more flexible way [32], but their techniques too are applicable only to map functions that are matrix-by-matrix multiplications. In many practical applications, such as computations in neural networks and machine learning, the map functions are non-linear and can be very complicated with little structure. This motivates us to investigate the MapReduce framework with straggling nodes for general map and reduce functions. In particular, we present universal coded computing schemes that can be applied to arbitrary straggling systems, irrespective of the specific map and reduce functions. Moreover, we show the optimality of our schemes among the class of universal schemes that do not rely on special properties of the map and reduce functions.
More specifically, we first propose a systematic construction of universal coded computing schemes for straggling systems from any placement delivery array for distributed computing (Comp-PDA) [8]. A placement delivery array (PDA) is an array whose entries are either a special symbol " * " or integers called ordinary symbols. It was introduced in [33] to represent both the placement and the delivery of coded caching schemes with uncoded prefetching in a single array. In particular, the coded caching schemes proposed by Maddah-Ali and Niesen in [34] can be represented as PDAs [33]. The corresponding PDAs will be referred to as MAN-PDAs, and, as we will see, they also play a fundamental role in coded computing with stragglers. PDAs have further been generalized to other coded caching scenarios such as device-to-device models [35], combination networks [36], networks with private demands [37], and medical data sharing problems [38]. Moreover, several different PDA constructions have been proposed in [39]–[43]. Our focus here is on a subclass of PDAs, called Comp-PDAs, which were introduced in [8] to design coded computing schemes for MapReduce systems without straggling nodes. In this paper, we show that Comp-PDAs can also be used to construct coded computing schemes for straggling systems, and we express the storage and communication loads of the obtained schemes in terms of the Comp-PDA parameters.
We then proceed to characterize the fundamental storage-communication (SC) tradeoff for straggling systems by showing that the SC tradeoff curve achieved by MAN-PDA based schemes matches a new information-theoretic converse for universal computing schemes with stragglers. That is, our converse bounds the SC tradeoffs achieved by any coded computing scheme that applies to arbitrary map and reduce functions. For special map and reduce functions, e.g., linear functions, it is possible to find tailored coded computing schemes that achieve better SC tradeoffs than the one implied by our information-theoretic converse; see, e.g., [32]. It is worth pointing out that the MAN-PDA based coded computing schemes adopt a fixed storage strategy irrespective of the active set size Q. This implies that the fundamental SC tradeoff curve remains unchanged even if the active set size Q has not yet been determined during the map phase. The proposed schemes are thus also optimal in a scenario where the active set size Q is unknown during the map phase.
Finally, we study the complexity of optimal (or near-optimal) coded computing schemes. In fact, a major practical limitation of the SC-optimal coded computing schemes based on MAN-PDAs is that they can only be implemented if the number of files N in the system grows exponentially with the number of nodes K. However, as we show in this paper, in most cases MAN-PDAs achieve their corresponding fundamental SC pairs with the smallest possible number of files, i.e., with the smallest file complexity, among all Comp-PDA based coded computing schemes. This practical limitation is thus not a weakness of the MAN-PDAs, but seems inherent to PDA-based coded computing schemes for stragglers. Interestingly, the problem can be circumvented by slightly backing off from the SC-optimal tradeoff curve. We show that the schemes based on the Comp-PDAs in [33] achieve SC pairs close to the fundamental SC tradeoff curve but with a significantly smaller number of files than the optimal MAN-PDAs. More precisely, we fix an integer q, let the number of nodes K be a multiple of q, and let the storage load r be such that r/K ∈ {1/q, (q − 1)/q} holds. We compare the Comp-PDA in [33] and the MAN-PDA for such pairs (K, r) while letting both parameters tend to infinity proportionally. This comparison shows that the ratio of the minimum required number of files of the Comp-PDA in [33] to that of the MAN-PDA vanishes as O(e^{−K(1 − 1/q) ln(q/(q−1))}), while the ratio of their communication loads approaches 1.
Lastly, we conduct numerical simulations to gain insights on how to choose the active set size Q in a practical system. In fact, choosing a large Q increases the duration of the map phase, because it takes longer until Q nodes have terminated their computations. Increasing Q, however, also decreases the durations of the shuffle and reduce phases, because they are proportional to the communication load (which decreases with Q) and to the number of output functions computed at each node (which is inversely proportional to Q), respectively. In this paper we present a model for the total execution time of a coded computing scheme and numerically find the optimal choice of the active set size Q for this model.
We summarize the contributions of this paper as follows: 1) We establish a general framework for constructing universal coded computing schemes for straggling systems from Comp-PDAs, and evaluate their SC pairs in terms of the Comp-PDA parameters. 2) We derive the fundamental SC tradeoff for any universal straggling system by means of an information-theoretic converse, which matches the SC pairs achieved by schemes based on the MAN-PDAs. 3) We show that in most cases, points on the fundamental SC tradeoff curve can only be achieved with the same file complexity as MAN-PDA based schemes. Some points close to the fundamental SC tradeoff curve can, however, be achieved with significantly smaller file complexities.
The remainder of this paper is organized as follows. Section II formally describes our model, and Section III reviews the definitions of PDAs and Comp-PDAs; Section IV presents the main results of this paper; Section V presents numerical results; Sections VI to VIII present the major proofs of our results; and Section IX concludes this paper.
Notations: For positive integers n, k such that n ≥ k, we use the notations [n] ≜ {1, 2, . . ., n} and C k n ≜ n!/(k!(n − k)!). The binary field is denoted by F 2 , and the n-dimensional vector space over F 2 is denoted by F n 2 . We use |A| to denote the cardinality of the set A, while for a signal X, |X| is the number of bits in X. The order of set operations is from left to right. Finally, 1(·) denotes the indicator function that evaluates to 1 if the statement in parentheses is true and to 0 otherwise.

II. SYSTEM MODEL
A (K, Q) straggling system is parameterized by the positive integers K, Q, N, D, U, V, W, as described in the following. The system aims to compute D output functions φ 1 , . . ., φ D through K distributed computing nodes from N files. Each output function φ d : F NW 2 → F U 2 (1 ≤ d ≤ D) takes as inputs the length-W files in the library W = {w 1 , . . ., w N }, and outputs a bit stream of length U, i.e., u d = φ d (w 1 , . . ., w N ) ∈ F U 2 . Assume that the computation of the output functions φ d can be decomposed as φ d (w 1 , . . ., w N ) = h d (f d,1 (w 1 ), . . ., f d,N (w N )), where • the map functions f d,n : F W 2 → F V 2 map the file w n into a length-V intermediate value (IVA) v d,n ≜ f d,n (w n ); and • the reduce function h d : F NV 2 → F U 2 combines the N IVAs into the output u d . Notice that a decomposition into map and reduce functions is always possible. In fact, trivially, one can set the map and reduce functions to be the identity and output functions, respectively, i.e., f d,n (w n ) = w n and h d = φ d , in which case V = W. However, to mitigate the communication cost during the shuffle phase, one would prefer a decomposition such that the length of the IVAs is as small as possible while still allowing the nodes to compute the final outputs. The computation is carried out through three phases, namely, the map, shuffle, and reduce phases.
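As a concrete illustration, the map/reduce decomposition can be sketched in a few lines of code. The byte-counting output functions and all names below are our own hypothetical choices, used only to show that φ d splits into per-file map functions and a combining reduce function.

```python
# A minimal sketch of the map/reduce decomposition, with hypothetical
# map and reduce functions: φ_d counts occurrences of byte value d.

def phi(d, files):
    """Output function φ_d: total count of byte value d over all files."""
    return sum(w.count(d) for w in files)

def map_f(d, n, w_n):
    """Map function f_{d,n}: IVA v_{d,n}, computed from file w_n alone."""
    return w_n.count(d)

def reduce_h(d, ivas):
    """Reduce function h_d: combines the N IVAs into the final output."""
    return sum(ivas)

# N = 3 hypothetical "files" as byte strings.
files = [bytes([1, 2, 2]), bytes([2, 3]), bytes([1, 2])]
for d in (1, 2, 3):
    ivas = [map_f(d, n, w) for n, w in enumerate(files)]
    assert reduce_h(d, ivas) == phi(d, files)
```

Here the IVAs are small integers, so the shuffle phase would be cheap; in general one seeks a decomposition with the shortest possible IVAs.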
1) Map Phase: Each node k ∈ [K] stores a subset of files M k ⊆ W, and tries to compute all the IVAs from the files in M k , denoted by C k ≜ {v d,n : d ∈ [D], w n ∈ M k }. Each node takes a random amount of time to compute its corresponding IVAs. To limit the latency of the system, the coded computing scheme proceeds with the shuffle and reduce phases as soon as a fixed number Q ∈ [K] of nodes have terminated the map computations. These nodes are called active nodes, and the set of all active nodes is called the active set, whereas the other K − Q nodes are called straggling nodes. For simplicity, we consider the symmetric case in which each subset Q ⊆ [K] of size |Q| = Q is active with the same probability. Let the random variable Q denote the random active set. Then, Pr(Q = Q) = 1/C Q K for each Q ⊆ [K] with |Q| = Q. In our model, we also assume that the map phase has been designed in such a way that all the files can be recovered1 from any active set of size Q. Hence, for any file w n ∈ W, the number of nodes t n storing this file must satisfy t n ≥ K − Q + 1. (2) The output functions φ 1 , . . ., φ D are then uniformly assigned2 to the nodes in Q. Let D Q k be the set of indices of the output functions assigned to a given node k ∈ Q.
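The requirement that every file be stored at at least K − Q + 1 nodes is equivalent to requiring that every file be recoverable from any active set of size Q. A small sketch, with a hypothetical storage design, that checks both formulations:

```python
from itertools import combinations

def feasible(storage, K, Q, N):
    """True if every file can be recovered from every active set of size Q.
    storage[k] is the set of file indices stored at node k (hypothetical)."""
    return all(any(n in storage[k] for k in active)
               for active in combinations(range(K), Q)
               for n in range(N))

def stored_enough(storage, K, Q, N):
    """Equivalent condition: each file is stored at t_n >= K - Q + 1 nodes."""
    return all(sum(n in storage[k] for k in range(K)) >= K - Q + 1
               for n in range(N))

# Example: K = 4 nodes, Q = 3, N = 6 files, each file stored at
# exactly 2 = K - Q + 1 nodes.
storage = [{0, 1, 2}, {0, 3, 4}, {1, 3, 5}, {2, 4, 5}]
assert feasible(storage, 4, 3, 6) and stored_enough(storage, 4, 3, 6)
```

The two checks agree in general: if some file were stored at K − Q or fewer nodes, the stragglers could be exactly those nodes, leaving the file unrecoverable.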
Denote the set of all the partitions of [D] into Q equal-sized subsets by ∆.
2) Shuffle Phase: The nodes in Q proceed to exchange their computed IVAs. Each node k ∈ Q multicasts a signal X Q k = ϕ Q k (C k ), where ϕ Q k denotes the encoding function of node k. We assume a perfect multicast channel, i.e., each active node k ∈ Q perfectly receives all the transmitted signals X Q ≜ {X Q q : q ∈ Q}. 3) Reduce Phase: Using the received signals X Q from the shuffle phase and the local IVAs C k computed in the map phase, node k has to be able to compute all the IVAs of its assigned output functions, i.e., {v d,n : d ∈ D Q k , n ∈ [N]}. Finally, with the restored IVAs, it computes each assigned function via the reduce function, namely, u d = h d (v d,1 , . . ., v d,N ) for d ∈ D Q k . To measure the storage and communication costs, we introduce the following definitions.
Definition 1 (Storage Load). The storage load r is defined as the total number of files stored across the K nodes normalized by the total number of files N, i.e., r ≜ (Σ K k=1 |M k |)/N. Definition 2 (Communication Load). The communication load L is defined as the average total number of bits sent in the shuffle phase, normalized by the total number of bits of all intermediate values, i.e., L ≜ E[Σ k∈Q |X Q k |]/(NDV), where the expectation is taken over the random active set Q.

Definition 3 (Storage-Communication (SC) Tradeoff).
A pair of real numbers (r, L) is achievable if for any ε > 0, there exist positive integers N, D, U, V, W, a storage design {M k } K k=1 of storage load less than r + ε, a set of uniform assignments of output functions, and a collection of encoding functions with communication load less than L + ε, such that all the output functions φ 1 , . . ., φ D can be computed successfully. For a fixed Q ∈ [K], we define the fundamental storage-communication (SC) tradeoff as L * K,Q (r) ≜ inf{L : (r, L) is achievable}. Notice that if r ≥ K, then each node can store all the files and compute any function locally. On the other hand, from assumption (2), any feasible scheme satisfies r ≥ K − Q + 1. Therefore, throughout the paper, we only focus on the interval r ∈ [K − Q + 1, K]. Moreover, by the symmetry assumption on the reduce function assignment and the fact that each node computes the IVAs of all D output functions for the files it has stored, the optimal communication load is independent of the reduce function assignment. This is similar to the case without straggling nodes (see [5, Remark 3]).
Definition 4 (File Complexity).The smallest number of files N required to implement a given scheme is called the file complexity of this scheme.
In the above problem definition, the nodes store entire files during the map phase, and during the shuffle phase they reconstruct all the IVAs corresponding to their output functions. This system definition does not allow reducing the storage or communication loads by exploiting special structures of the map or reduce functions, as proposed for example in [31], [32]. As a consequence, all the coded computing schemes presented in this paper apply universally to arbitrary map and reduce functions, and the SC tradeoff in Definition 3 applies only to such universal schemes. In fact, as we will explain, for linear reduce functions the SC tradeoff derived in [32] improves over the one in Definition 3, since it was derived for a system where nodes do not have to store individual files and reconstruct all the required IVAs, but linear combinations thereof suffice.

A. Definitions
Placement delivery arrays (PDAs), introduced in [33], are the main tool of this paper. To adapt them to our setup, we use the following definitions.
Definition 5 (Placement Delivery Array (PDA)). For positive integers K, F, T and a nonnegative integer S, an F × K array A = [a j,k ], j ∈ [F], k ∈ [K], composed of T special symbols " * " and some ordinary symbols 1, . . ., S, each occurring at least once, is called a (K, F, T, S) PDA if, for any two distinct entries a j1,k1 and a j2,k2 with a j1,k1 = a j2,k2 = s for an ordinary symbol s, we have a) j 1 ≠ j 2 and k 1 ≠ k 2 , i.e., they lie in distinct rows and distinct columns; and b) a j1,k2 = a j2,k1 = * , i.e., the corresponding 2 × 2 subarray formed by rows j 1 , j 2 and columns k 1 , k 2 must be of the form [s * ; * s] or [ * s; s * ].
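Conditions a) and b) of Definition 5 are easy to verify mechanically. Below is a small checker sketch; the 3 × 3 example array is our own, chosen so that every repeated ordinary symbol obeys the 2 × 2 pattern of condition b).

```python
def is_pda(A):
    """Check conditions a) and b) of Definition 5 on an F x K array whose
    entries are '*' or integer ordinary symbols."""
    F, K = len(A), len(A[0])
    cells = [(j, k) for j in range(F) for k in range(K)]
    for i, (j1, k1) in enumerate(cells):
        for (j2, k2) in cells[i + 1:]:
            s = A[j1][k1]
            if s == '*' or A[j2][k2] != s:
                continue
            # a) equal ordinary symbols must lie in distinct rows and columns
            if j1 == j2 or k1 == k2:
                return False
            # b) the complementary corners of the 2 x 2 subarray must be '*'
            if A[j1][k2] != '*' or A[j2][k1] != '*':
                return False
    return True

# A small example: every ordinary symbol occurs twice, and each pair of
# occurrences has '*' on the complementary corners.
A = [['*', 1, 2],
     [1, '*', 3],
     [2, 3, '*']]
assert is_pda(A)
```

Since every row of this example contains a " * ", it is also a Comp-PDA in the sense of Definition 6.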
A PDA with only " * " entries is called trivial. Notice that in this case S = 0 and KF = T. A PDA is called g-regular if each ordinary symbol occurs exactly g times.
For our purpose, we introduce the following definitions similarly to the ones in [8].

Definition 6 (PDA for Distributed Computing (Comp-PDA)).
A Comp-PDA is a PDA with at least one " * "-symbol in each row.

Definition 7 (Minimum Storage Number). Given a Comp-PDA A, its minimum storage number τ is defined as the minimum number of " * "-symbols in any of the rows of A.
Definition 8 (Symbol Frequencies). For a given nontrivial (K, F, T, S) Comp-PDA, let S t denote the number of ordinary symbols that occur exactly t times, for t ∈ [K]. The symbol frequencies θ 1 , θ 2 , . . ., θ K of the Comp-PDA are then defined as θ t ≜ tS t /(KF − T), for t ∈ [K]. They indicate the fractions of ordinary entries of the Comp-PDA that occur exactly 1, 2, . . ., K times, respectively. For completeness, we also define θ t ≜ 0 for t > K.
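The symbol frequencies can be computed directly from the array. A sketch, assuming (as stated in Definition 8) that θ t is the fraction of ordinary entries whose symbol occurs exactly t times; the 2-regular example array is our own.

```python
from collections import Counter

def symbol_frequencies(A, K):
    """θ_1, ..., θ_K of Definition 8: θ_t is the fraction of ordinary
    entries of the Comp-PDA whose symbol occurs exactly t times."""
    counts = Counter(a for row in A for a in row if a != '*')
    ordinary_entries = sum(counts.values())   # = KF - T
    S_t = Counter(counts.values())            # S_t[t] = # symbols occurring t times
    return [t * S_t.get(t, 0) / ordinary_entries for t in range(1, K + 1)]

# A 2-regular 3 x 3 Comp-PDA: every ordinary symbol occurs twice, so θ_2 = 1.
A = [['*', 1, 2],
     [1, '*', 3],
     [2, 3, '*']]
assert symbol_frequencies(A, 3) == [0.0, 1.0, 0.0]
```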

B. Constructing a Coded Computing Scheme from a Comp-PDA: A Toy Example
In this subsection, we illustrate the connection between Comp-PDAs and coded computing schemes with stragglers through a toy example. Section VI ahead describes a general procedure to obtain a coded computing scheme with stragglers from any Comp-PDA, and presents a performance analysis of the obtained scheme.
Consider the (4, 6, 12, 4) Comp-PDA A in Example 1, and assume a (K, Q) = (4, 3) straggling system with N = 6 files and D = 3 output functions. The scheme is illustrated in Fig. 1 for the case that node 3 is straggling. In Fig. 1, the line "files" in each of the four boxes indicates the files stored at the nodes. The remaining lines in the boxes illustrate the computed IVAs, where red circles, green triangles, and blue squares depict IVAs pertaining to output functions φ 1 , φ 2 , and φ 3 , respectively. More specifically, a red circle with the number i ∈ {1, 2, . . ., 6} in the middle stands for IVA v 1,i , and so on. The lines below the boxes of the active nodes 1, 2, and 4 indicate the IVAs that the nodes have to learn to compute their output functions. In this example, it is assumed that node 1 computes φ 1 , node 2 computes φ 2 , and node 4 computes φ 3 . The signals on the left/right side of the boxes indicate the signals sent by the nodes. Here, splitting of an IVA indicates that the IVA is decomposed into a substring consisting of the first half of the bits and a substring consisting of the second half of the bits, and the plus symbol stands for a bit-wise XOR operation on the substrings.
We now explain the distributed coding scheme associated with the PDA A. We start by associating column k of A with node k and row j of A with file w j in the system (k ∈ [4], j ∈ [6]). In the map phase, node k stores file w j if the (j, k)-th entry of A is a " * "-symbol. For example, the first column of A indicates that node 1 stores files w 1 , w 2 , and w 3 . Each node then computes all the IVAs corresponding to the files it has stored. So node 1 computes the IVAs v d,j for all d ∈ [3] and j ∈ {1, 2, 3}. Assume that node 3 is the only straggler. Nodes 1, 2, and 4 thus form the active set and continue with the shuffle and reduce procedures. Accordingly, we extract from the PDA A the subarray A {1,2,4} consisting of columns 1, 2, and 4 (the columns corresponding to the active set). Notice that A {1,2,4} is also a Comp-PDA (in particular, it has at least one " * " symbol in each row), and the node corresponding to a given column has stored all the files indicated by the " * "-symbols in this column. After the map phase, we are thus in the same situation as described in [8], [44] when a coded computing scheme without stragglers is to be constructed from a Comp-PDA, and as a consequence, the same shuffle and reduce procedures can be applied. We describe these procedures here in detail for completeness.
The shuffle phase is as follows. The output functions φ 1 , φ 2 , φ 3 are allocated to nodes 1, 2, and 4, respectively. For each symbol s ∈ {1, 2, 3, 4} occurring g times (g = 2 or 3), pick out the g × g subarray containing s. For example, symbol s = 2 is associated with a 3 × 3 subarray of A {1,2,4} . Each occurrence of the symbol "2" in this subarray stands for an IVA desired by the node in the corresponding column and computed at the other nodes in this subarray. The row of the symbol indicates the file this IVA pertains to. The " * " symbols in this row indicate that the IVA can indeed be computed by all nodes in this subarray except for the one associated with the column of the "2" symbol. In the above example, the three "2" symbols from top to bottom represent the IVAs v 3,1 , v 2,3 , and v 1,5 , respectively. These IVAs are shuffled in a coded manner. To this end, they are first split into g − 1 = 2 equally large sub-IVAs, and each of these sub-IVAs is labeled by one of the nodes where the IVA has been computed (i.e., by the columns with " * " symbols). The signal sent by a given node i is then simply the component-wise XOR of the sub-IVAs with label i.
In our example, we split each of the IVAs v 3,1 , v 2,3 , and v 1,5 into two halves, labeled by the nodes of the subarray that have computed them. The same procedure is applied for all other ordinary symbols 1, 3, and 4 in the subarray A {1,2,4} . The following table lists all the signals sent by the nodes, where the first line lists their associated ordinary symbols: In the reduce phase, the nodes extract their missing IVAs as follows. Since node 1 has computed v 1,1 , v 1,2 , and v 1,3 in the map phase, it still needs to decode v 1,4 , v 1,5 , and v 1,6 . A similar procedure is also applied for any other possible realization of the active set Q of size Q = 3.
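The coded exchange for symbol s = 2 can be sketched with placeholder bit strings. The two-byte IVAs and variable names below are hypothetical, but the splitting, labeling, and XOR multicast follow the description above: node 1 wants v 3,1 , node 2 wants v 2,3 , node 4 wants v 1,5 , and each IVA was computed at the two other nodes of the subarray.

```python
# Sketch of the coded shuffle for one ordinary symbol (hypothetical IVAs).

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def halves(iva):
    m = len(iva) // 2
    return iva[:m], iva[m:]

v31, v23, v15 = b'\xaa\x01', b'\xbb\x02', b'\xcc\x03'
# Split into g - 1 = 2 sub-IVAs; label each half with one computing node.
v31_2, v31_4 = halves(v31)   # v31 computed at nodes 2 and 4
v23_1, v23_4 = halves(v23)   # v23 computed at nodes 1 and 4
v15_1, v15_2 = halves(v15)   # v15 computed at nodes 1 and 2

# Each node multicasts the XOR of the sub-IVAs carrying its own label.
X1 = xor(v23_1, v15_1)
X2 = xor(v31_2, v15_2)
X4 = xor(v31_4, v23_4)

# Node 1 recovers v31: it knows v23 and v15 from its own map computations,
# so it strips them off the signals received from nodes 2 and 4.
assert xor(X2, v15_2) + xor(X4, v23_4) == v31
```

Nodes 2 and 4 decode v 2,3 and v 1,5 symmetrically from the two signals they did not send.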
In the above scheme, the total number of files stored at all nodes is 3 × 4 = 12, and the storage load is thus r = 12/N = 2. The total length of the transmitted signals is 7.5V bits, and remains unchanged irrespective of the realization of the active set Q (as long as it is of size Q = 3). The communication load of the system is thus L = 7.5V/(6 × 3 × V) = 5/12.
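The storage and communication load arithmetic of the example can be checked numerically against Definitions 1 and 2:

```python
# Toy example loads: K = 4 nodes each store 3 of the N = 6 files,
# D = 3 output functions, and the shuffle transmits 7.5 V bits in total
# for every active set of size Q = 3.
N, D = 6, 3
r = (3 * 4) / N            # Definition 1: total stored files / N
L = 7.5 / (N * D)          # Definition 2: shuffle bits / (N * D * V)
assert r == 2.0
assert abs(L - 5 / 12) < 1e-12
```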

IV. MAIN RESULTS
In this section, we present our main results. Details and proofs are deferred to Sections VI–VIII.

A. Coded Computing Schemes for Straggling Systems from Comp-PDAs
In Section VI, we propose a coded computing scheme for a (K, Q) straggling system based on any Comp-PDA with K columns and minimum storage number τ ≥ K − Q + 1. Theorem 1 is proved by analyzing this coded computing scheme; the analysis is deferred to Section VI-B.
Theorem 1. From any given (K, F, T, S) Comp-PDA A with symbol frequencies {θ t } K t=1 and minimum storage number τ ∈ [K − Q + 1 : K], one can construct a coded computing scheme for a (K, Q) straggling system achieving the SC pair (r A , L A ) with file complexity F.
Theorem 1 characterizes the performance of the coded computing scheme obtained from a Comp-PDA as described in Section VI in terms of the Comp-PDA parameters. In the following, we will simply say that a Comp-PDA achieves this performance.
Notice that the file complexity of any Comp-PDA based scheme coincides with the number of rows F of the Comp-PDA. We shall therefore call the parameter F of a Comp-PDA its file complexity.
As we show in the following, Theorem 1 can be simplified for regular Comp-PDAs.
Corollary 1. From any given g-regular (K, F, T, S) Comp-PDA A, with g ∈ [K] and minimum storage number τ ∈ [K − Q + 1 : K], one can construct a coded computing scheme for a (K, Q) straggling system achieving the SC pair with file complexity F.
Proof: From Theorem 1, we only need to evaluate L A when A is a g-regular (K, F, T, S) Comp-PDA. In this case, all the S ordinary symbols occur g times, i.e., θ g = 1 and θ t = 0 for all t ∈ [K]\{g}.
Then the conclusion directly follows from Theorem 1.
Corollary 1 is of particular interest since there are several explicit regular PDA constructions for coded caching in the literature, such as [33], [42], [43], which are also Comp-PDAs. In particular, the following PDAs, obtained from the coded caching scheme proposed by Maddah-Ali and Niesen [34], are important.
We observe that for any i ∈ [K − 1], the array P i is a Comp-PDA (see [33] for details). For i = K, the array P i consists only of " * "-entries and is thus a trivial PDA. By Corollary 1, we directly obtain the following result.
Corollary 2. Consider a (K, Q) straggling system and a positive integer r ∈ [K − Q + 1 : K]. On such a straggling system, the MAN-PDA P r achieves the storage load r and the optimal communication load L * K,Q (r) given in Theorem 2 ahead. The coded computing scheme associated with P r is equivalent to our proposed coded computing scheme for straggling systems (CCS) in [1]. Here, we present it as a special case of the more general Comp-PDA framework. As we shall see, the Comp-PDA framework allows us to design new coded computing schemes with smaller file complexities.

B. The Fundamental Storage-Communication Tradeoff
We are ready to present our result on the fundamental SC tradeoff, which is proved in Section VII.
Theorem 2. For a (K, Q) straggling system with a given integer storage load r in the discrete set [K − Q + 1 : K], the fundamental SC tradeoff L * K,Q (r) is given by the points in (6), which are achievable with schemes of file complexity C r K . For a general r in the interval [K − Q + 1, K], the fundamental SC tradeoff is given by the lower convex envelope formed by the points in (6). Fig. 2 shows the fundamental SC tradeoff curves for K = 10 and different values of Q. When Q = 1, the curve reduces to the single point (K, 0), while when Q = K, the curve corresponds to the fundamental tradeoff without straggling nodes (cf. [5, Fig. 1]). In this latter case without stragglers, the fundamental SC tradeoff curve is achieved by the CDC scheme in [5]. For a general value of Q and integer storage load r ∈ [K − Q + 1 : K], the fundamental SC tradeoff pair (r, L * K,Q (r)) is achieved by the MAN-PDA P r , see Corollary 2. This implies that for a fixed integer storage load r ∈ [1 : K], the SC pairs {(r, L * K,Q (r))} K Q=K−r+1 are all achieved by the same PDA P r , irrespective of the size of the active set Q. As we show in Section VI-A, the map procedure of the coded computing scheme corresponding to a given Comp-PDA at a given node k depends only on the " * "-symbols in the k-th column of the PDA. Therefore, all the points on the fundamental SC tradeoff curve with the same integer storage load r can be attained with the same map procedures, described by the MAN-PDA P r . (See also Remark 3 in Section VI-A.)
As a consequence, the fundamental SC tradeoff points with integer storage load r ∈ [1, K] remain achievable (and optimal) also in a related setup where the size of the active set Q is unknown during the map procedure. By simple time- and memory-sharing arguments, this conclusion extends to all points on the fundamental SC tradeoff curve with arbitrary real-valued storage loads r ∈ [1, K]. This also relates to the scenario where the system imposes a hard time limit on the map phase and proceeds to the shuffle and reduce phases with the (random) number of nodes that have terminated within due time. For a given storage load r, the MAN-PDA based coded computing scheme guarantees that when Q ≥ K + 1 − r nodes have terminated during the map phase, all IVAs are computed at least once, and thus the system can proceed to data shuffling and achieves the minimum required communication load L * K,Q (r). When only Q < K + 1 − r nodes have terminated, some IVAs are not computed, and hence the system cannot proceed.
It is further worth pointing out that all our PDA based coded computing schemes are universal and achieve the same performance for any choice of map and reduce functions. No structure is assumed on these functions. Similarly, our information-theoretic converse applies only to such universal coded computing schemes. If the map or reduce functions have certain properties, for example linearity, it is possible to achieve better SC tradeoffs by storing combinations of files instead of each file separately [31], [32]. Fig. 3 compares Theorem 2 to the results in [31], [32]. It can be observed that the MAN-PDA based scheme outperforms the scheme in [31] but is inferior to the improved version in [32]. As already mentioned, however, the scheme in [32] works only for linear map functions, and not for arbitrary functions as our schemes do. Another advantage of our schemes is that they work over the binary field, and are thus easier to implement than the MDS-based schemes in [31], [32], which require a large enough field size.

C. Optimality and Reduction of File Complexity
From Theorem 1 and Corollary 2, the coded computing scheme based on the MAN-PDA P r , for r ∈ [K − Q + 1 : K], has file complexity F = C r K and achieves the fundamental SC tradeoff. The following theorem indicates that this is, in most cases, the smallest file complexity achieving the same tradeoff point. The proof is deferred to Section VIII. Theorem 3. Consider a (K, Q) straggling system and a Comp-PDA based scheme achieving the fundamental SC tradeoff pair (r, L * K,Q (r)) for some r ∈ [K − Q + 1 : K]. Then, except in the special cases of Remark 1, its file complexity is at least C r K . Remark 1. It is easy to verify that, in the case Q ∈ {2, K} and r = K − Q + 1, the fundamental SC tradeoff can be achieved with F = 1 with the Comp-PDAs [ * , * , . . ., * , 1] and [ * , 1, 2, . . ., K − 1], respectively.
If r | K, specialize the Comp-PDA in P1) to parameter q = K/r. This results in an r-regular (K, q^{r−1}, K q^{r−2}, (q − 1) q^{r−1}) Comp-PDA with minimum storage number r, and the proof is then immediate from Corollary 1. If (K − r) | K, specialize the Comp-PDA in P2) to parameter q = K/(K − r). This results in an r-regular (K, (q − 1) q^{K−r−1}, K (q − 1)^2 q^{K−r−2}, q^{K−r−1}) Comp-PDA, and the proof again follows from Corollary 1.
In the following proposition, we quantify how close the above SC tradeoff point is to the optimum, and by how much the file complexity can be reduced.
Proposition 1. Consider a (K, Q) straggling system and an integer r ∈ [K − Q + 1 : K] with r | K or (K − r) | K. The SC tradeoff L_{K,Q}(r) and the file complexity F achieved by constructions P1) or P2) above then satisfy the bounds stated next, where A_q and B_q are constants depending only on q.
The proof is given in Appendix B. From the above proposition, for a fixed integer q, whenever r/K ∈ {1/q, (q−1)/q} and K and r scale proportionally to infinity, the communication load is close to optimal, while the file complexity can be reduced by a factor that increases exponentially in K.
Remark 2. In this work, we only consider two particular PDAs. There has been extensive research on coded caching schemes with low subpacketization levels using various approaches, most of which have an equivalent PDA representation. For example, PDAs can be constructed from hypergraphs [42], bipartite graphs [43], linear block codes [45], and Ruzsa-Szemerédi graphs [46]. The result in Theorem 1 makes it possible to apply all these constructions straightforwardly to straggling systems.

V. NUMERICAL RESULTS
The goal of this section is to provide insights on how to choose the active set size Q in a practical system that employs either the (SC-tradeoff optimal) MAN-PDA based schemes or the low-complexity Comp-PDA based schemes of Section IV-C.
In our system, the map-phase computation times at the various nodes are random and independent of each other. The map-phase computation time of node k ∈ [K] is denoted T_k and follows a shifted exponential distribution [15]: Pr[T_k ≤ t] = 1 − e^{−μ(t − t_0)} for t ≥ t_0, where t_0 is the minimum time for node k to accomplish its computation, and μ > 0 is a given delay parameter. The map phase is terminated as soon as a given number Q of nodes have terminated their computations. Thus, the duration of the map phase is given by the Q-th order statistic T_(Q) of the tuple (T_1, ..., T_K). By a standard result on order statistics of exponential distributions [47, pp. 18], T_(Q) follows the same distribution as the weighted sum t_0 + Σ_{j=K−Q+1}^{K} Y_j/(μ j), where Y_{K−Q+1}, ..., Y_K are i.i.d. unit-rate exponential random variables. The total execution time of the distributed computing scheme is given by the sum of the durations of the map, shuffle, and reduce phases. We assume that (i) the duration of the map phase is T_(Q); (ii) the duration of the shuffle phase is proportional to the communication load, so α L^(Q)(r), for some factor α > 0 and given storage load r; and (iii) the duration of the reduce phase is proportional to the inverse of Q, so β/Q, for some factor β > 0; this is motivated by the fact that the number of reduce functions each node has to compute is D/Q. For a fixed active set size Q and given r, the random total execution time of the distributed computing scheme is thus T_(Q) + α L^(Q)(r) + β/Q, and the expected running time is E[T_(Q)] + α E[L^(Q)(r)] + β/Q. Notice that since T_1, ..., T_K are i.i.d. random variables, each subset of [K] of size Q is equally likely to be the active set. For a coded computing scheme based on a PDA A, the expected communication load E[L^(Q)(r)] is thus given by (5). More specifically, for the MAN-PDA based schemes the communication load is characterised in (6), and for the low-complexity Comp-PDA based scheme in (7). Notice further that for given factors μ, α, β > 0 and parameters r, K, N, D, t_0, the expected duration of the map phase, E[T_(Q)], is increasing in Q, whereas the durations of the shuffle and reduce phases, α E[L^(Q)(r)] and β/Q, are both decreasing in Q. This can also be verified from Fig. 4, which shows the durations of the map, shuffle, and reduce phases of the MAN-PDA based schemes and the low-complexity Comp-PDA schemes for parameters K = 20, r = 10, t_0 = 1, μ = 0.5, α = 100, and β = 10 or β = 1, and for different values of the active set size Q. (The parameters have been chosen so that the shuffle phase dominates the other phases. This behaviour has been observed in the experiments of [5] on Amazon EC2 clusters.) The choice of the active set size Q that minimizes the total execution time thus depends on the weights μ, α, β. For example, for parameters K = 20, r = 10, t_0 = 1, μ = 0.5, α = 100, and β = 1, both for the MAN-PDA schemes and for the low-complexity Comp-PDA scheme, the total execution time is minimized for active set size Q = 16; see Fig. 5. Fig. 6 shows the optimal choice of Q as a function of the delay parameter μ, where the other parameters are set as described above. We observe that this optimal choice of Q is increasing in μ. The reason is that increasing values of μ imply shorter map-phase computation times at the nodes. In this case it is advantageous to choose the active set size Q large, because this causes only a small increase in the map-phase duration but a substantial decrease in the durations of the shuffle and reduce phases.

Fig. 7 depicts the expected execution time under the optimal choice of Q for both the MAN-PDA based scheme and the low-complexity Comp-PDA based scheme as a function of μ, for two values of β. It is observed that the expected execution time of the low-complexity Comp-PDA based schemes of Section IV-C is very close to that of the MAN-PDA based schemes. The reason is that the choice of the PDA structure only affects the average communication load E[L^(Q)] and thus only the duration of the shuffle phase, but not the durations of the map and reduce phases. Since the MAN-PDA based schemes and the low-complexity schemes have comparable communication loads (see Proposition 1), according to (8) the two schemes must also have comparable total execution times.
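The tradeoff between the three phases can be made concrete with a short numerical sketch. The map-phase term below uses the standard exponential-spacings identity for the Q-th order statistic; `toy_load`, however, is a hypothetical placeholder for the load in (6), assumed only for illustration, not the paper's formula.

```python
def expected_map_time(K, Q, t0, mu):
    """E[T_(Q)]: expected Q-th order statistic of K i.i.d. shifted
    exponentials, via the exponential-spacings decomposition."""
    return t0 + sum(1.0 / (mu * j) for j in range(K - Q + 1, K + 1))

def toy_load(Q, r, K):
    # Stand-in for the load in (6) -- an assumed placeholder, not the
    # paper's formula: CDC-style load with multicast gain r + Q - K.
    return (1.0 - r / K) / (r + Q - K)

def expected_total_time(K, Q, r, t0, mu, alpha, beta):
    # Model of (8): map duration + alpha * E[L^(Q)(r)] + beta / Q.
    return expected_map_time(K, Q, t0, mu) + alpha * toy_load(Q, r, K) + beta / Q

K, r, t0, mu, alpha, beta = 20, 10, 1.0, 0.5, 100.0, 10.0
# Sweep the feasible active set sizes Q >= K - r + 1 and pick the minimizer.
best_Q = min(range(K - r + 1, K + 1),
             key=lambda Q: expected_total_time(K, Q, r, t0, mu, alpha, beta))
```

As in Fig. 4, the map term increases in Q while the shuffle and reduce terms decrease, so the minimizer balances the two.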

VI. CODED COMPUTING SCHEMES FOR STRAGGLING SYSTEMS FROM COMP-PDAS (PROOF OF THEOREM 1)
In this section, we prove Theorem 1 by describing how to construct a coded computing scheme from a given Comp-PDA and analyzing its performance.
A. Constructing a Coded Computing Scheme for a Straggling System from a Comp-PDA

In [8], we described how to obtain a coded computing scheme without stragglers from any given Comp-PDA. A similar procedure is possible in the presence of stragglers if the minimum storage number satisfies τ ≥ K − Q + 1. In fact, assume a given Comp-PDA A. The storage design in the map phase corresponding to A is the same as without straggling nodes. As part of the map phase, each node computes all the IVAs that it can compute from its stored files. For the reduce phase of the straggling system, we restrict to the subarray A_Q of A formed by the columns of A with indices in the active set Q. Notice that A_Q is again a Comp-PDA, because the minimum storage number of A is at least K − Q + 1, so after eliminating K − Q columns from A each row still contains at least one "*" symbol. Shuffle and reduce phases are performed as in a non-straggling setup, see [8], but with the Comp-PDA A replaced by the new Comp-PDA A_Q. For completeness, we explain the map, shuffle, and reduce phases in detail.
Fix a (K, F, T, S) Comp-PDA A with τ ≥ K − Q + 1. Partition the N files of W into F batches W_1, W_2, ..., W_F of η = N/F files each, so that W_1, W_2, ..., W_F form a partition of W. It is implicitly assumed here that η is an integer number.
1) Map Phase: Each node k stores the files in M_k given by (9) and computes the IVAs in (1). The map phase terminates whenever any Q nodes accomplish their computations. Throughout this section, let Q = Q be the realization of the active set. Then, A_Q denotes the subarray of A composed of the columns in Q. Also, let g^Q_s denote the number of occurrences of the symbol s in A_Q, i.e., g^Q_s = |{(i, k) : a_{i,k} = s, k ∈ Q}|, and let I^Q be the set of symbols occurring only once in A_Q. For each s ∈ I^Q, let (i, j) be the unique pair in [F] × Q such that a_{i,j} = s. Since the number of "*" symbols in the i-th row of A is equal to or larger than K − Q + 1 by the assumption τ ≥ K − Q + 1, there exists at least one k ∈ Q such that a_{i,k} = *. Arbitrarily choose such a k and assign s to the set I_k. Further, let A^Q_k denote the set of ordinary symbols in column k occurring at least twice in A_Q.
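The storage design can be illustrated in a few lines (a sketch with helper names of our own, on a toy Comp-PDA with two "*"s per row on K = 3 nodes): each node stores the batches whose row carries a "*" in its column, and τ ≥ K − Q + 1 guarantees a "*" survives in every row of A_Q for any active set.

```python
from itertools import combinations

def storage_design(A):
    """M_k: indices of file batches stored at node k, i.e. rows of the
    Comp-PDA with a '*' in column k (independent of Q, cf. Remark 3)."""
    K = len(A[0])
    return [{i for i, row in enumerate(A) if row[k] == '*'} for k in range(K)]

def min_storage_number(A):
    return min(row.count('*') for row in A)

# A toy Comp-PDA on K = 3 nodes (illustrative example, two '*'s per row).
A = [['*', '*', 1],
     ['*', 1, '*'],
     [1, '*', '*']]
K, Q = 3, 2
assert min_storage_number(A) >= K - Q + 1
# Every row keeps at least one '*' in any Q columns, so each batch is
# mapped by some active node for every realization of the active set.
for active in combinations(range(K), Q):
    assert all(any(row[k] == '*' for k in active) for row in A)
```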

Pick any uniform assignment of the reduce functions to the nodes in the active set Q. For each i ∈ [F] and j ∈ Q, let U^Q_{i,j} denote the set of IVAs for node j computed from the files in W_i.

2) Shuffle Phase: Node k multicasts the signal X^Q_k, where the signals X^Q_{k,s} are created as described in the following, depending on whether s ∈ I_k or s ∈ A^Q_k. For all s ∈ I_k, set X^Q_{k,s} = U^Q_{i,j}, where (i, j) is the unique index pair in [F] × Q such that a_{i,j} = s.
To describe the signal X^Q_{k,s} for s ∈ A^Q_k, we first describe a partition of the IVA set U^Q_{i,j} for each pair (i, j) with a_{i,j} = s and j ∈ Q. Let j_1, j_2, ..., j_{g^Q_s − 1} denote the other columns of A_Q containing the symbol s. Partition U^Q_{i,j} into g^Q_s − 1 subsets of equal size and denote these subsets by U^{Q,j_1}_{i,j}, U^{Q,j_2}_{i,j}, ..., U^{Q,j_{g^Q_s−1}}_{i,j}. For all s ∈ A^Q_k, set X^Q_{k,s} as in (13).

3) Reduce Phase: Node k computes all IVAs in (12). In the map phase, node k has already computed all IVAs corresponding to rows i with a_{i,k} = *. Fix an arbitrary i ∈ [F] such that a_{i,k} ≠ *, and set s = a_{i,k}. If s ∈ A^Q_k, the sub-IVA in (12) can be restored by node k from the signal X^Q_{j,s} sent by node j (see (13)): this yields (14), where the pairs (l_t, j_t) indicate the other g^Q_s − 2 occurrences of the symbol s in A_Q, i.e., a_{l_t,j_t} = s. Notice that the sub-IVAs on the right-hand side of (14) have been computed by node k during the map phase, because by the PDA properties, a_{l_t,j_t} = a_{i,k} = s and j_t ≠ k imply that l_t ≠ i and a_{l_t,k} = *. Therefore, U^{Q,j}_{i,k} can be decoded from (14).
If s ∉ A^Q_k, then s ∈ I^Q by (10). There thus exists an index j ∈ Q \ {k} such that s ∈ I_j, and therefore, by (11), the subset U^Q_{i,k} can be recovered from the signal X^Q_{j,s} sent by node j.

Remark 3. It is worth pointing out that the storage design {M_k}_{k=1}^{K} only depends on the positions of the "*" symbols in A, but not on the parameter Q (see (9)). This indicates that, in practice, the map phase can be carried out even without knowing how many nodes will participate in the shuffle and reduce phases.
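The peeling step behind (13) and (14) can be illustrated with a generic XOR example (illustrative byte strings and names, not the paper's actual signals): a node that has already mapped all but one of the sub-IVAs entering a coded multicast recovers the missing one from its side information.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Sub-IVAs wanted by nodes 1, 2, 3 respectively; the multicasting node
# has computed all three during its map phase (illustrative stand-ins).
U = {1: b'\x01\x02', 2: b'\x0a\x0b', 3: b'\x21\x22'}

# Coded multicast signal, as in (13): XOR of the sub-IVAs for one symbol.
X = reduce(xor, U.values())

# Node 1 knows U[2] and U[3] as map-phase side information (by the PDA
# properties), so it peels them off to recover its own sub-IVA, cf. (14).
recovered = xor(xor(X, U[2]), U[3])
assert recovered == U[1]
```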

B. Performance Analysis
We analyzed the storage and communication loads in the no-stragglers setup in [8]. For the scheme in the preceding subsection, the analysis of the storage load follows the same lines as in [8]. When computing the communication load defined in (3), we have to average over all realizations of the active set Q.
1) Storage Load: Since the Comp-PDA A contains T "*" symbols, and each "*" symbol indicates that a batch of η = N/F files is stored at a given node (see (9)), the storage load of the proposed scheme is r = T/F.

2) Communication Load: We first analyze the length of the signals sent for a given realization of the active set Q = Q. For any s ∈ [S], let g_s be the number of occurrences of s in A, and g^Q_s the number of occurrences of s in the columns in Q. By (11) and (13), the length of the signals associated to symbol s when Q is the active set follows, and the total length of all the signals is obtained by summing over s ∈ [S]. We now compute the communication load as defined in (3), where we average over all realizations of the active set Q: in (e), we defined S_g to be the number of ordinary symbols occurring g times, for each g ∈ [K]; in (f), we used the equality C^l_g = (g/l) · C^{l−1}_{g−1}; in (h), we eliminated the indices of zero terms in the summation of (g); and (i) follows from the definition of the symbol frequencies.
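The accounting can be mirrored in code (an illustrative sketch under our own conventions: lengths are measured in units of one batch's IVA set, and the helper names are ours): a symbol occurring g ≥ 2 times in the active columns contributes g/(g − 1), while a singleton symbol is sent uncoded at unit cost.

```python
from collections import Counter

def signal_lengths(A, active):
    """Per-symbol multicast lengths (in units of one batch's IVA set):
    a symbol occurring g >= 2 times in the active columns costs g/(g-1);
    a singleton symbol is sent uncoded at unit cost."""
    counts = Counter(A[i][k] for i in range(len(A)) for k in active
                     if A[i][k] != '*')
    return {s: (g / (g - 1) if g >= 2 else 1.0) for s, g in counts.items()}

A = [['*', '*', 1],
     ['*', 1, '*'],
     [1, '*', '*']]
# Storage load: total '*' count T divided by the number of rows F.
T = sum(row.count('*') for row in A)
assert T / len(A) == 2.0            # each batch stored twice, so r = 2

# With all K = 3 nodes active, symbol 1 occurs g = 3 times: cost 3/2.
assert signal_lengths(A, active=[0, 1, 2]) == {1: 1.5}
# With active set {0, 1}, symbol 1 occurs g = 2 times: cost 2/1 = 2.
assert signal_lengths(A, active=[0, 1]) == {1: 2.0}
```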

C. File Complexity of the Proposed Schemes
The analysis of the file complexity is similar to the no-straggler setup in [8]. The files are partitioned into F batches, each containing η = N/F > 0 files, and it is assumed that η is a positive integer. The smallest number of files N for which this assumption can be met is N = F. Therefore, the file complexity of the scheme is F.

VII. THE FUNDAMENTAL STORAGE-COMMUNICATION TRADEOFF (PROOF OF THEOREM 2)
By Corollary 2, the SC pair (r, L*_{K,Q}(r)), r ∈ [K − Q + 1 : K], can be achieved by the MAN-PDA P_r. For a general non-integer r ∈ [K − Q + 1, K], the lower convex envelope of these points can be achieved by memory- and time-sharing. It remains to prove the converse in Theorem 2.
Let Z^Q_K(x) be the piecewise linear function connecting the points (u, Z^Q_K(u)) sequentially over the interval [K − Q + 1, K]. We shall need the following lemma, proved in Appendix A.
Lemma 1. The sequence Z^Q_K(u) is strictly convex and decreasing for u ∈ [K − Q + 1 : K].

Let M ≜ {M_k}_{k=1}^{K} be a storage design and let (r, L) be an SC pair achieved based on M; i.e., a_{M,u} is the number of files stored u times across all the nodes. Then, by definition, a_{M,u} satisfies (18). For any active set Q, the analogous quantity is the number of files stored exactly l times in the nodes of the set Q. Since any file that is stored u times across all the nodes has l occurrences in exactly C^l_u C^{Q−l}_{K−u} of the possible active sets, summing over n ∈ [N] yields (19). Now we use the result in [5, Lemma 1] to lower bound the communication load for any realization of the active set. The average communication load over the random realization of the active set Q is then obtained as displayed, where (e) follows from (19), and (f) follows from the fact that r ≤ r + ε. Since ε can be arbitrarily close to zero, we conclude the claimed lower bound. In particular, when r ∈ [K − Q + 1 : K], by (17), step (a) holds since for l = Q the term in the summation is zero. This establishes the desired converse.

VIII. OPTIMALITY OF FILE COMPLEXITY (PROOF OF THEOREM 3)
In order to prove Theorem 3, we first need to derive several lemmas. Therefore, by the definition of a_{M,u} in (18), this indicates that each file is stored exactly r times across the system.

A. Preliminaries
Lemma 3. Consider a g-regular (K, F, T, S) PDA where K ≥ g ≥ 2. If there are exactly g − 1 "*"s in each row, then F ≥ C^{g−1}_K.

Proof: With Definition 5 (the definition of PDAs), the conclusion follows directly from [33, Lemma 2].
For each u ∈ [K], define the quantities in (22) and (23).

Proof: For each u ∈ [2 : K − 1], by (22), the difference is at most 0, where in (a) we used (29); in (b) we separated the two summations in (a) and eliminated the indices of zero terms in the separated summations; and in (c) we used the variable change l′ = l + 1. Moreover, if u ≥ 2 and Q ≥ 3, the claim follows from (23), where in (a) we used the fact that l/(l − 1) ≤ 2 for any l ≥ 2. To prove the second part, we first note that, by Corollary 3, the number of batches required by constructions P1) and P2) is given in (31). On the other hand, to achieve the fundamental SC tradeoff, the number of required batches is given in (32), where (a) follows by applying Stirling's approximation √(2π) n^{n+1/2} e^{−n} ≤ n! ≤ e √(2π) n^{n+1/2} e^{−n} to both the numerator and the denominator. Taking the ratio F/F* using (31) and (32), we complete the proof of the second part.

Fig. 1: An example of the CCS scheme for a system with K = 4, N = 6 and Q = 3, where the third node is a straggling node.

Definition 9 (Maddah-Ali Niesen PDA (MAN-PDA)). Fix any integer i ∈ [K], and let {T_j}_{j=1}^{C^i_K} denote all subsets of [K] of size i. Also, choose an arbitrary bijective function κ from the collection of all subsets of [K] with cardinality i + 1 to the set [C^{i+1}_K]. Then, define the array P_i = [p_{j,k}] by p_{j,k} = * if k ∈ T_j, and p_{j,k} = κ(T_j ∪ {k}) otherwise.
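A minimal sketch of this construction (assuming the standard MAN-PDA entries: "*" if k ∈ T_j, else κ(T_j ∪ {k}); function names are ours):

```python
from itertools import combinations

def man_pda(K, i):
    """MAN-PDA per Definition 9: rows indexed by the size-i subsets T_j
    of [K]; entry (j, k) is '*' if k is in T_j, and otherwise the label
    kappa(T_j ∪ {k}), with kappa a bijection onto [C(K, i+1)]."""
    rows = list(combinations(range(K), i))
    kappa = {T: idx + 1 for idx, T in enumerate(combinations(range(K), i + 1))}
    return [['*' if k in T else kappa[tuple(sorted(set(T) | {k}))]
             for k in range(K)] for T in rows]

A = man_pda(4, 2)        # K = 4, i = 2: a C(4,2) x 4 = 6 x 4 array
# Each row has exactly i = 2 stars, and each ordinary symbol occurs
# exactly i + 1 = 3 times -- the regularity used throughout the paper.
assert all(row.count('*') == 2 for row in A)
```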
(a) holds by (16); (b) holds since for each s ∈ [S], Σ_{l=0}^{Q} 1(g^Q_s = l) = 1 and Σ_{g=1}^{K} 1(g_s = g) = 1; (c) follows from (15); and (d) holds because if a symbol s occurs exactly g times in a PDA A, then there are C^l_g C^{Q−l}_{K−g} active sets in which it occurs exactly l times. Further, (a) follows from (20); (b) holds because the inner summation in (a) only includes summation indices u ∈ [K − Q : K], and it includes the summation index u ∈ {K − Q + 1, ..., K} if, and only if, the outer summation index l satisfies l ≤ u and l ≥ u + Q − K; (c) follows from (17); (d) follows from Lemma 1; and (e) follows from (19).

Lemma 2. If a coded computing scheme achieves the fundamental SC tradeoff pair (r, L*_{K,Q}(r)) for an integer r ∈ [K − Q + 1 : K], then each file is stored exactly r times across the nodes.

Proof: According to Lemma 1, the sequence {Z^Q_K(u)}_{u=K−Q+1}^{K} is strictly convex. Thus, for the integer r, the point (r, Z^Q_K(r)) lies strictly below the chords through the points (u, Z^Q_K(u)), u ∈ [K − Q + 1 : K] \ {r}.