On the estimation of spatial density from mobile network operator data

We tackle the problem of estimating the spatial distribution of mobile phones from Mobile Network Operator (MNO) data, namely Call Detail Record (CDR) or signalling data. The process of transforming MNO data to a density map requires geolocating radio cells to determine their spatial footprint. Traditional geolocation solutions rely on Voronoi tessellations and approximate cell footprints by mutually disjoint regions. Recently, some pioneering work started to consider more elaborate geolocation methods with partially overlapping (non-disjoint) cell footprints coupled with a probabilistic model for phone-to-cell association. Estimating the spatial density in such a probabilistic setup is currently an open research problem and is the focus of the present work. We start by reviewing three different estimation methods proposed in the literature and provide novel analytical insights that unveil some key aspects of their mutual relationships and properties. Furthermore, we develop a novel estimation approach for which a closed-form solution can be given. Numerical results based on semi-synthetic data are presented to assess the relative accuracy of each method. Our results indicate that the estimators based on overlapping cells have the potential to improve spatial accuracy over traditional approaches based on Voronoi tessellations.

notwithstanding a number of open challenges in accessing and making use of such data [6].
Beyond the scope of academic research, companies and nongovernmental organizations are offering insights and statistical products derived from MNO data in various application domains, from humanitarian support to analysis of tourism flows, and statistical organizations are looking with increasing interest at MNO data as a potential source for compiling new official statistics [7], [8], [9]. However, despite the great volume of research literature on the topic, several methodological aspects remain open along the journey from raw MNO data to reliable summary information.
The focus of this contribution is the problem of measuring the spatial density of mobile phones at a given reference time based on data collected by a single MNO. The density of mobile phones provides a rough proxy of the spatial distribution of humans in a given territory around that time, also called "present population", "hourly population" or "de facto population".
The data processing flow, from raw MNO data to final density estimates, involves modeling the spatial coverage patterns of radio cells or groups thereof, i.e., mapping each radio cell to a geographical territory. This logical function, called "cell geolocation" in earlier literature [10], [9], can be performed in different ways, with varying levels of sophistication as to what kinds of data are taken into consideration and how the radio propagation phenomenon is modeled. Choosing a more elaborate method amounts to attempting a more detailed and complex modeling of the mobile network infrastructure in order to potentially, but not necessarily, achieve better accuracy.
Geolocation methods can be grouped into two classes, discriminated by whether the radio cell locations are modeled as spatially disjoint or, conversely, overlaps between cell locations are allowed. The methods involving disjoint (non-overlapping) cell locations, or "tessellations", are simpler to implement and by far more popular in the existing literature. Only recently have a few pioneering papers such as [11], [12], [13], [14] started to consider alternative, more elaborate schemes based on overlapping cells.
A separate module along the data workflow, logically subsequent to the geolocation module, performs the task of estimating (or inferring) the underlying spatial density from the set of geolocalized data. There is a fundamental interdependency between these two modules: if the geolocalization method of choice belongs to the specific class of tessellations, then the estimation task reduces to a simple area-proportional solution. Conversely, if the geolocalization method belongs to the more general class of overlapping locations, determining the "best" estimation method is an open research problem, which is the focus of the present paper. Given this background, we make here the following contributions:
• We provide a coherent representation of the general processing workflow, from individual MNO records to spatial density estimates, with a clear definition of the geolocation and estimation modules.
• Focusing on estimation methods for overlapping cells, we review three distinct solutions from previous literature and provide novel analytical insights about their properties and mutual relationships.
• We derive a novel ad hoc estimator with a closed-form solution.
• We provide quantitative insights by comparing the accuracy of the different solutions on a semi-synthetic scenario.
Along the way, we identify directions for further research. The rest of this paper is organized as follows. We start by providing a general view of the data transformation flow in Section II, from the input data sources to the desired output information, and formulate the density estimation problem analytically. In Section III, we review a number of estimation solutions proposed earlier in the literature; in Section IV, we present novel analytical results that reinterpret these methods and provide further insights into their mutual relationships. In Section V, we propose a completely novel ad hoc estimator for the problem at hand. Illustrative numerical results based on (semi-)synthetic data are presented in Section VI. Finally, we conclude the paper in Section VII.

A. Desired output
We address the problem of estimating the spatial density of mobile phones in a given territory around a reference time t* starting from the data collected from the network of a single mobile network operator. The mobile phone density obtained from single-MNO data provides a rough proxy for the human population density, in the sense that the variations in space and time of the two distributions can reasonably be expected to be strongly correlated. Therefore, the estimated mobile phone density can be used directly in those applications where the spatio-temporal variations are of interest, i.e., where and when density increases or decreases. Furthermore, mobile phone density estimation provides a basis for the accurate estimation of human population density if additional calibration data are available¹, but the aspects related to the phone-to-human density transformation problem fall outside the scope of the present contribution.

¹ The accurate estimation of the absolute value of human population density from mobile phone density is a non-trivial task since humans do not map one-to-one to mobile phones: some persons carry no phone, other persons carry multiple phones, and some phones are shared by multiple persons or carried by none. Calibration methods clearly depend on the kind of available calibration data, e.g., census data or survey data.

B. Data processing workflow
In this section, we provide a general modular view of the data processing workflow. For the sake of generality, we make an abstraction effort and distinguish, for each module, the logical task to be performed (what the module does) from the particular method that may be adopted to perform that task (how the module does it). In this way, the proposed general workflow can be particularized to represent a broad set of different specific methodologies, allowing us to explore various trade-offs in the methodological solution space for the problem at hand.
The general workflow is sketched in Fig. 1, separately for the two families of geolocation methods that are introduced below.
The processing workflow takes two logically distinct sources of data in input, namely "event data" and "radio network data". These data are pre-processed separately and then combined.
Event data result from transaction events between the mobile phone and the network infrastructure, e.g., calls, data exchange or signaling procedures. A mobile phone z that at time t interacts with the mobile network through radio cell i generates an event record ⟨z, t, i⟩. In practice, z represents the phone pseudonym², t the event timestamp and i the radio cell identifier³. Depending on how the data collection system is configured, only a subset of all events is captured into the data set. Call Detail Records (CDR) and the more informative signaling data fall in this category. Recall that we aim at inferring the spatial density at a particular time instant t* from the event records collected around that time. The phone-to-cell attribution module reduces the set of event data to a vector of cell counters c ≜ [c_1 c_2 ··· c_I]^T wherein the ith element c_i denotes the number of phones counted in cell i = 1, ..., I. A simple method to perform the phone-to-cell attribution is sketched in Fig. 2: for each generic phone z, the event record that is temporally closest to the reference time t* is selected, and the phone location is attributed to the cell index i therein. In this way, increasing the frequency of observation (as done by considering full signaling data instead of less frequent CDR) reduces the measurement noise associated with the risk that the (unknown) phone position at the reference time t* falls outside the attributed cell i.

² The phone pseudonym is typically based on some non-reversible hash of the International Mobile Subscriber Identity (IMSI).
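As an illustration, the nearest-in-time attribution rule can be sketched in a few lines. This is a minimal sketch under our own assumptions: the function name and the record layout (tuples of pseudonym, timestamp, cell index) are illustrative, not taken from any specific implementation.

```python
from datetime import datetime

def attribute_phones_to_cells(events, t_star, n_cells):
    """Nearest-in-time phone-to-cell attribution (illustrative sketch).

    events: iterable of (phone_pseudonym, timestamp, cell_index) records;
    t_star: reference time; n_cells: number of radio cells I.
    Returns the cell count vector c of length I.
    """
    best = {}  # phone pseudonym -> (time distance to t_star, attributed cell)
    for z, t, i in events:
        d = abs((t - t_star).total_seconds())
        if z not in best or d < best[z][0]:
            best[z] = (d, i)
    c = [0] * n_cells
    for _, i in best.values():
        c[i] += 1
    return c
```

With denser signaling data, each phone's closest record lies nearer to t*, which is precisely why richer event streams reduce the attribution noise discussed above.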
³ Under certain conditions, e.g., signaling data captured on the Radio Access Network (RAN) links, additional variables that could help identify the mobile phone location at sub-cell level may be available in addition to the cell identifier i, e.g., the so-called Timing Advance (TA), which allows upper and lower bounds on the distance between the phone and the base station antenna to be determined, or radio measurements from neighboring cells as used, e.g., in Location Based Systems (LBS). These additional variables should be considered absorbed into the data element i which, in such cases, should be reinterpreted as the union of the cell identifier and any other available variable carrying spatial information, e.g., TA or LBS. The use of additional information from the radio interface is still relatively infrequent (see [15] for a prominent example) but could become more important in future 5G deployments. Here, we do not elaborate further on this scenario, and maintain the variable i to denote solely the radio cell identifier.

Figure 1. Data processing workflow. The main focus of this work relates to the estimation module, based on a probabilistic model, associated with geolocation methods allowing overlapping cells (upper path in the central box). The workflow for tessellation methods (bottom path) may be considered a particular, degenerate case of the previous method, where the emission probability matrix P reduces to the identity matrix and can therefore be omitted. Due to the absence of the stochastic component, the estimation module degenerates into a simple area-proportional computation.

Radio network data relate to the deployment and configuration of the radio network infrastructure. Such data may be more or less detailed, ranging from merely a set of antenna tower locations to a very detailed set of full radio configuration parameters, e.g., antenna orientation, transmission power, beam width, frequency, etc.
Based on the available radio network data, the "geolocation" module derives a set of cell data that aim at binding the radio cells to the spatial territory.

C. Geolocation approaches
The availability of more complete radio network data offers the possibility to adopt more sophisticated geolocation methods in order to produce more detailed cell data. Conversely, if only minimal radio network data are available, there is no choice but to adopt very simple geolocation methods to produce undetailed cell location data. For the sake of generality we need to identify a cell data format that can accommodate the whole range of possibilities, including but not limited to very simple ones. To this aim we introduce the notion of "cell footprint" as explained below.
Let us consider a discretization of the geographical territory in a regular grid where grid units are indexed by j = 1, ..., J. For a generic cell i at the reference time⁴ t*, let the non-negative quantity s_ij ≥ 0 encode the degree by which cell i is expected to "cover" the point j in space. The collection of such variables for a single cell i, formally s_i ≜ [s_i1 ··· s_iJ]^T, represents the "cell footprint". Depending on the assumed geolocation model, the variables s_ij can be restricted to take binary values, thus leading to on/off flat coverage patterns for each individual cell, or alternatively they can be allowed to take continuous values that capture the strength of the signal of cell i around point j, an approach first proposed in [13], [17] and then adopted also in [10], [14].
If cell data are expressed in the form of cell footprints, then the task of the geolocation module is to determine the set of cell footprints from the available radio network data. Radio cells that have identical footprints can be grouped together, with index i now referring to the whole cell group. There are numerous geolocation methods, each making use of more or less information and with different degrees of sophistication. Every geolocation method has, explicitly or implicitly, an underlying assumption about the process by which a generic mobile phone selects a radio cell to connect to. Generally speaking, following [10] we can classify all possible approaches into two large families:
• Tessellations: methods that partition the territory into a set of disjoint (non-overlapping) footprints associated with different cells or groups thereof.
• Overlapping cells: methods that allow cell footprints to overlap.
The family of tessellation approaches may be seen as a particular (degenerate) case of the more general overlapping cells approach. With tessellations, cell data reduce to polygons or grid coordinates, and the general workflow simplifies as sketched in Fig. 1.

D. Geolocation methods based on tessellations
The majority of earlier studies based on MNO data have adopted tessellation methods relying on the Voronoi partitioning principle. We recall that, for a discrete set of k points (Voronoi seeds) in a bounded planar region, a Voronoi partition divides the region into a set of k non-overlapping sub-regions (Voronoi polygons) whereby each point in space is associated to the closest seed. A Voronoi partition is well defined, in the sense that a single partition (Voronoi diagram) exists for a given set of seeds in the bounded region. However, there are multiple ways of projecting a set of mobile radio cells into a set of Voronoi seeds and multiple ways of bounding a region of interest, and each method results in a different Voronoi diagram for the same input set of cells.
The simplest and by far most popular method takes antenna tower positions as seeds. As, in real networks, multiple radio cells have their antennas co-located on a single antenna tower, the number of unique seeds, hence polygons, is lower than the number of radio cells. This method does not require any information about the radio cell configuration other than the antenna tower position, and is therefore very simple to implement. The implicit modeling assumption underlying this method is that mobile phones always connect deterministically to the closest antenna tower. This assumption ignores several fundamental aspects of mobile communications such as antenna directionality, power control, load balancing and multi-layer radio deployments [16].
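On a discretized territory, the closest-tower rule can be sketched as follows. This is our own illustration (function name and toy coordinates are not from the cited works): co-located antennas collapse to a single seed, and each tile is then assigned to its nearest seed.

```python
import numpy as np

def voronoi_footprints(towers, grid_xy):
    """Assign each grid tile to its closest antenna tower (Voronoi rule).

    towers: (K, 2) array of tower coordinates (duplicates, i.e. co-located
    cells, collapse to one seed); grid_xy: (J, 2) array of tile centroids.
    Returns, for each tile j, the index of the covering seed.
    """
    towers = np.unique(towers, axis=0)            # merge co-located antennas
    # Squared distances from every tile to every seed, then closest seed.
    d2 = ((grid_xy[:, None, :] - towers[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)
```

The resulting disjoint footprints are exactly the deterministic phone-to-cell assumption criticized in the text: every phone is assumed to connect to the nearest seed, regardless of antenna directionality or load.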
Another variant, first considered in [18], [11], [12] and more recently also in [19], places seeds at the barycenter of the radio cell coverage area. This requires additional knowledge about the radio cell configuration parameters in the case of directional cells (e.g., azimuth orientation, beam width, coverage range) in order to determine, at least roughly, the cell coverage area and then its barycenter.
The method presented by the Belgian operator Proximus in [8] is more articulate. First, they distinguish between large cells (macro and micro cells) and small cells (femto and pico cells) and apply Voronoi partitioning only to the large cells. Second, in order to take into account the directionality of cell sectors, they place N Voronoi seeds in the vicinity of each N-sector antenna (typically N = 3 for 120° sectors) with a small offset in their respective azimuth directions.
Generally speaking, all variants of the Voronoi approach have the disadvantage that even a single cell change, e.g., addition, removal or shift of its associated seed, produces a change in the neighboring cell footprints and consequently requires a new computation of the entire Voronoi diagram. Departing from the Voronoi approach, other forms of tessellations may be obtained by mapping each cell to one particular unit of a predefined partition, e.g., a regular square grid (as in [20]) or a variable resolution quadtree [21]. Whether based on Voronoi or an independent fixed grid, the more sophisticated variants of tessellation methods require additional information about the radio cell configuration, beyond the antenna location. However, as more detailed cell data are made available for the geolocation process, the limitation of considering non-overlapping footprints seems to be less justified, motivating the interest for methods accounting for the intrinsic overlapping nature of real-world radio cells.

E. Geolocation methods based on overlapping cell locations
This family of methods explicitly takes into account the fact that real-world radio cells overlap by design. To the best of our knowledge, the first work to consider an overlapping cells approach for the problem of density estimation was proposed by Ricciato et al. in [11] and [12] (see also the earlier work [22] for a different application). Therein, the authors consider cell footprints that are "flat", i.e., for a given radio cell i, a point j in space is either included or excluded from its coverage area.
A more sophisticated approach was developed recently by Tennekes et al. in [13], [23] (see also the earlier presentation [17]) and implemented in the mobloc R package [24]. This approach considers a non-uniform profile whereby the parameters s_ij are continuous and vary within the cell footprint, having a physical interpretation connected to the received signal strength. The propagation model in [13] also uses elevation maps and land use data, the latter only to estimate the path-loss exponent.
The determination of the cell footprints s i 's is a critical modeling task: the set of possible implementation solutions is wide, ranging from elementary geometric models as in [11], [12] to more articulate but still parsimonious parametric propagation models as done in [13], and even more sophisticated (and less parsimonious) empirical numerical models that take into account more detailed data about the territory (elevation maps, type of buildings) and about the radio network dynamics (power control, inter-cell interference, etc.), possibly reusing data and tools that are already available and used regularly for radio network planning and optimization tasks.
Regardless of how the cell footprints s i 's are empirically determined, the next step in the modeling process is to encode such information into a probabilistic data generation model that will then serve as basis for the development of an estimation (inference) method. In the next section, we first introduce the probabilistic model, and then explain how this model is linked to the geolocation method of choice in the specific context of MNO data analysis.

F. Probabilistic model
For the rest of this paper, we assume that the geographic territory is discretized into a regular square grid. This assumption enables the use of vector notation and thus simplifies the formalism as well as the software implementation of the considered methods.
We focus on the estimation problem at a given reference time, therefore the temporal dimension can be dropped from the formalism. We resort to the term "tile" to refer to the individual square grid unit, reserving the term "cell" to denote the radio cell constituting the mobile (or cellular) radio network.
Let the jth element u_j of the column vector u ≜ [u_1 ··· u_J]^T denote the unknown number of mobile phones in tile j = 1, ..., J. Let the ith element c_i of the column vector c ≜ [c_1 ··· c_I]^T denote the observed number of mobile phones counted in cell i = 1, ..., I (cell count vector). We denote the total number of phones across all cells by C ≜ Σ_{i=1}^{I} c_i = 1_I^T c, where the symbol 1_k denotes a column vector of size k with all elements equal to 1. We denote by p_ij ∈ [0, 1] the probability that a generic phone placed in tile j will be detected (counted) in cell i. In other words, p_ij represents the conditional probability:

p_ij ≜ Pr(phone counted in cell i | phone located in tile j).    (1)

The p_ij's are called "emission probabilities" in the field of emission tomography [25] and we retain the same term here. Their value can be instantiated based on the cell data output by the geolocation model, as explained below in Subsection II-G.
For the sake of a more compact notation we gather the individual probabilities p_ij into a matrix P of size I × J. The elements of P sum up to one along columns, formally 1_I^T P = 1_J^T, meaning that the matrix P is column-stochastic.
The (measured) cell count vector c can be interpreted as a single realization of a random vector c̃ whose expected value is given by:

E[c̃] = P u.    (2)

In the estimation problem we must solve for the estimand u given the vector of measurement data c, representing the single available observation of c̃, and the model matrix P. This being a type of inversion problem, the estimate û can be written in general as:

û = g(P, c),    (3)

where g(·) denotes the estimator of choice. It is evident that the estimand variables must be constrained to be non-negative, i.e., u_j ≥ 0, ∀j. However, we relax the constraint that such variables should be integers, since the rounding error can be safely neglected vis-à-vis other sources of uncertainty.
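A small numeric illustration of this generative model (toy numbers, not from the paper): the columns of P sum to one, the expected cell counts follow Eq. (2), and one realization of c̃ can be drawn by letting each phone pick a cell according to the column of its tile.

```python
import numpy as np

# Toy model (illustrative): I = 2 cells, J = 3 tiles. Each column of P
# gives the cell-selection probabilities for one tile, so columns sum to 1.
P = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0]])
u = np.array([30.0, 40.0, 30.0])          # true (unknown) tile counts
assert np.allclose(P.sum(axis=0), 1.0)    # P is column-stochastic

c_mean = P @ u                            # E[c~] = P u, Eq. (2)

# One synthetic realization of c~: phones in tile j pick a cell
# independently according to column P[:, j].
rng = np.random.default_rng(0)
c = sum(rng.multinomial(int(u[j]), P[:, j]) for j in range(3))
```

Note that the middle tile is covered by both cells, so its phones split randomly between them; this is exactly the stochastic component that tessellation methods lack.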
In practical deployments, the number of tiles is (much) larger than the number of cells, i.e., J ≫ I, and moreover J ≫ 1 (for instance, J = 160,000 and I around 600 in the simulation scenario considered in Section VI). Therefore, even if c were perfectly known, the direct inversion of Eq. (2) would constitute an under-determined problem. For that reason, the estimation problem in this case is affected by issues of structural non-identifiability, as per the definition given in [26]. Any additional external information available to help the estimation process (e.g., prior distributions, spatial constraints derived from geographical maps, known structural properties of the desired solution) can be embedded in the estimator g(·) to resolve, or at least reduce, the ambiguity among multiple solutions, as we will elaborate upon later.
If two generic tiles j1 and j2 yield equal emission probabilities for all cells, i.e., p_ij1 = p_ij2 for all i, and are associated with the same prior values (in case prior information is used in the estimate), then they are indistinguishable from each other. In such a case, their respective estimand variables u_j1 and u_j2 are perfectly collinear in the estimation problem and there is no way to resolve differences between them. Therefore, it makes sense to merge both tiles into a single super-tile, and then compute a single estimate for the latter⁵. Algebraically, this corresponds to merging identical columns of matrix P. We refer to this operation by the term "consolidation". Note that the consolidation process does not imply that the resulting (consolidated) matrix is necessarily full rank, i.e., it does not guarantee that the resulting consolidated instance of the problem is fully identifiable.
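The consolidation step, i.e., merging identical columns of P, can be sketched as follows (our own illustrative helper, not from the cited works):

```python
import numpy as np

def consolidate(P):
    """Merge tiles (columns of P) with identical emission probabilities.

    Returns the consolidated matrix (one column per super-tile) and, for
    each original tile, the index of the super-tile it was merged into.
    """
    cols, inverse = np.unique(P.T, axis=0, return_inverse=True)
    return cols.T, inverse
```

The `inverse` map allows estimates computed on super-tiles to be traced back to the original tiles; as noted above, the consolidated matrix may still be rank-deficient.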

G. Linking cell data to emission probabilities
The emission matrix P plays the role of a known input parameter to the estimation problem. Its value must be determined from the cell data output by the geolocation module (ref. Fig. 1). To this aim, a natural choice is to follow the approach first proposed in [13] and later adopted by other works [14], [9]. This model assumes that a mobile phone in tile j selects cell i with a probability that is proportional to its signal dominance relative to the other concurrent cells in the same tile, formally:

p_ij = s_ij / Σ_{i'=1}^{I} s_{i'j},    (4)

with k_j denoting the number of concurrent cells at tile j (i.e., the number of cells with s_ij > 0). This general model includes, as a particular case, the approach considered in the earlier work [12] where cell coverage was assumed to be on/off. In this case the variables s_ij ∈ {0, 1} are limited to binary values, hence Eq. (4) leads the non-zero elements to take fractional values p_ij = 1/k_j, meaning that all k_j cells covering tile j have the same probability of being selected.
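The column-wise normalization of Eq. (4) can be sketched as follows (our own illustrative helper; the guard for tiles with no coverage at all is an implementation detail not discussed in the text):

```python
import numpy as np

def emission_matrix(S):
    """Emission probabilities from cell footprints, Eq. (4):
    p_ij = s_ij / sum_i' s_i'j (column-wise normalization).

    S: (I, J) matrix of non-negative signal-dominance values; columns
    with no coverage at all are left as all-zero.
    """
    col = S.sum(axis=0, keepdims=True)
    return np.divide(S, col, out=np.zeros_like(S, dtype=float), where=col > 0)
```

With a binary S, each covered tile j yields p_ij = 1/k_j, recovering the on/off special case of [12].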
Tessellations may be seen as a further special case of the particular on/off coverage case above: as each point in space is covered by exactly one cell (k_j = 1), the emission probabilities reduce to binary values p_ij ∈ {0, 1}. In other words, when cells are mutually non-overlapping, the data generating process becomes deterministic: the binary model matrix P after consolidation reduces to the identity matrix, and the estimation problem becomes trivial as long as no external information is taken into account (such as, e.g., prior information in Bayesian settings as considered in [27]). In this contribution, we focus on the more general (and non-trivial) case of overlapping cells, with the understanding that the proposed solutions will also be applicable to the special case of tessellations.

III. ESTIMATION METHODS FROM LITERATURE
Whatever approach is chosen to instantiate the model matrix P (input parameter), a suitable resolution procedure is needed to compute the function g(·) in Eq. (3). This is the focus of the remaining part of this contribution. In this section we present three different solutions proposed in recent literature. To the best of our knowledge, no other solution beyond these three was previously considered for the problem at hand.

A. MLE-Multinomial
The method elaborated in [12, Section 5.3] (previously appearing in [11]) derives a Maximum Likelihood Estimator (MLE) based on the hierarchical generative model sketched in Fig. 3(a). In the first layer, starting from a single root pool, the C phones are randomly and independently allocated to the J tiles according to a vector of probabilities µ ≜ [µ_1 ··· µ_J]^T. Clearly, µ is a vector with non-negative elements summing up to unity, i.e., µ ≥ 0 and 1_J^T µ = 1. The resulting number of units in the tiles is distributed as a multivariate random vector ũ with Multinomial distribution (denoted by M) of parameters C and µ, i.e.,

ũ ∼ M(C, µ),    (5)

with ũ ≥ 0 and 1_J^T ũ = C. Therefore, by construction, the mean value can be written as u ≜ E[ũ | C, µ] = Cµ. In the second layer (ref. Fig. 3(a)) the units are assigned randomly and independently from tiles to cells according to the emission matrix P. Due to the independence of the random assignments at the two layers (from the root pool to tiles, and from tiles to cells) the random vector c̃ also has a Multinomial distribution with parameters C and Pµ:

c̃ ∼ M(C, Pµ).    (6)

Thus, the log-likelihood function for an observed value c is derived as (omitting irrelevant terms):

ℓ(µ; c) = c^T log(Pµ),    (7)

wherein we have used the vector notation log(·) to refer compactly to the vector of element-wise logarithms. Let µ̂_MLE denote the value of the probability vector µ that maximizes the log-likelihood Eq. (7) subject to the constraints µ ≥ 0 and 1_J^T µ = 1, formally:

µ̂_MLE = arg max_{µ ≥ 0, 1_J^T µ = 1} c^T log(Pµ).    (8)

By definition µ̂_MLE represents the Maximum Likelihood (ML) estimate for µ given the observed data c, but we still have to provide a point estimate for u. A natural choice (also taken by Shepp and Vardi in their seminal paper [25] for the method presented in the next subsection) is to take the corresponding expected mean value as the final estimate, formally:

û_MLE ≜ C µ̂_MLE,    (9)

where we retain the label "MLE" for simplicity. This equation represents a simple rescaling of µ̂_MLE by the factor C.
Therefore, with a simple variable substitution, the minimization in Eq. (8) can be rewritten to deliver the final estimate û_MLE directly, leading to the following constrained optimization problem:

û_MLE = arg min_{u ≥ 0, 1_J^T u = C} −c^T log(Pu).    (10)

In the original papers [11], [12] the minimization in Eq. (10) is conducted via standard numerical solvers. No remark is made therein about the uniqueness (or lack thereof) of the solution.
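A hedged sketch of such a numerical solution follows. The solver choice (SLSQP), the uniform starting point and the small epsilon guard inside the logarithm are our own implementation details, not prescribed by [11], [12]:

```python
import numpy as np
from scipy.optimize import minimize

def mle_multinomial(P, c):
    """Numerically solve Eq. (10): minimize -c^T log(P u) subject to
    u >= 0 and 1^T u = C, via a generic constrained solver (sketch)."""
    C = c.sum()
    J = P.shape[1]
    # Epsilon guard avoids log(0) when P u has zero entries during search.
    nll = lambda u: -c @ np.log(P @ u + 1e-12)
    res = minimize(nll, np.full(J, C / J), method="SLSQP",
                   bounds=[(0.0, None)] * J,
                   constraints={"type": "eq", "fun": lambda u: u.sum() - C})
    return res.x
```

Because the problem is under-determined (J ≫ I), the solver returns one of many likelihood-equivalent solutions, which is exactly the non-uniqueness issue left unremarked in the original papers.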

B. MLE-Poisson
Recently, the authors of [28] have considered applying the ML estimator developed in the field of emission tomography by Shepp and Vardi [25] to this problem. Like the previous approach, this method is also based on a hierarchical generative model with two layers, as sketched in Fig. 3(b), but now the elements of ũ are modeled as independent Poisson (instead of Multinomial) random variables with parameters ρ ≜ [ρ_1 ··· ρ_J]^T, i.e., ũ_j ∼ P(ρ_j) (P denoting the Poisson distribution). Following the same reasoning that led to Eq. (9), the ML estimate of ρ is taken as the estimate for u (see [25, p. 113]). The log-likelihood function is not given explicitly in [25] but it is proved that the ML estimate can be computed iteratively through an Expectation Maximization (EM) procedure: at the generic iteration m the new estimate û_j^{m+1} is computed from the previous estimate û_j^m according to the following formula (see [25, Eq. (2.13)]):

û_j^{m+1} = û_j^m Σ_{i=1}^{I} p_ij c_i / (Σ_{j'=1}^{J} p_{ij'} û_{j'}^m).    (11)

The authors of the original paper [25] warn that the initialization point should not contain zero elements, and by default assume a flat (uniform) initial solution û_j^0 = C/J, j = 1, ..., J.
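The EM iteration of Eq. (11) can be implemented in a few lines. This is an illustrative sketch: the fixed iteration count is our own choice, and a production implementation would add a convergence check.

```python
import numpy as np

def em_poisson(P, c, u0, n_iter=200):
    """Shepp-Vardi EM iteration for the MLE-Poisson estimator, Eq. (11):
    u_j^{m+1} = u_j^m * sum_i p_ij c_i / (P u^m)_i.

    u0 must contain no zero elements (zeros are stable points of the
    multiplicative update)."""
    u = np.asarray(u0, dtype=float).copy()
    for _ in range(n_iter):
        u *= P.T @ (c / (P @ u))
    return u
```

Since P is column-stochastic, one can verify that the update preserves the total count 1^T u = C after the first iteration.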

C. A simple estimator based on Bayes' rule (SB)
The simple procedure presented hereafter was adopted in the mobloc R package developed by Tennekes et al. [24] and elaborated in [13] (see also [17]). We shall refer to this method as the "Simple Bayes-rule estimator" (SB for short). Let q_ji denote the conditional probability:

q_ji ≜ Pr(phone located in tile j | phone counted in cell i).

Note the inversion of the conditioning direction between q_ji and p_ij defined earlier in Eq. (1). Let α_j denote the prior probability that a single generic unit falls in tile j, before observing the measurement data. Recalling the Bayes rule, it follows that

q_ji = p_ij α_j / Σ_{j'=1}^{J} p_{ij'} α_{j'}.

Therefore, the estimate in each tile j is computed directly as

û_j^SB = Σ_{i=1}^{I} q_ji c_i.    (21)
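A minimal sketch of this computation (our own illustration in Python, not the mobloc R code):

```python
import numpy as np

def sb_estimator(P, c, alpha):
    """Simple Bayes-rule (SB) estimator:
    q_ji = p_ij * alpha_j / sum_j' p_ij' alpha_j'  (Bayes rule),
    then u_j = sum_i q_ji c_i."""
    W = P * alpha[None, :]                  # joint weights p_ij * alpha_j
    Q = W / W.sum(axis=1, keepdims=True)    # row i holds q_ji over tiles j
    return Q.T @ c                          # u_j = sum_i q_ji c_i
```

Since each row of Q sums to one, the SB estimate always preserves the total count, i.e., the elements of the output sum to C.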

IV. INSIGHTS FROM EXISTING ESTIMATORS
In this section, we reinterpret the previous solutions and present our new results. More specifically, the analytical insights presented in this section will serve as the basis for the development of a novel estimation method in Section V.

A. The prior vector
All three methods reviewed above provide a point estimate û in output, based on the model matrix P and observed data c in input. To do so, they all require, implicitly or explicitly, the provision of an "initial point" as input to the computation. The initial point takes either the form of a (stochastic) vector of J non-negative elements summing up to unity, hereafter denoted by α ≜ [α_1 ··· α_J]^T (with 1_J^T α = 1 and α ≥ 0) or, equivalently, of its rescaled version a ≜ Cα (with 1_J^T a = C and a ≥ 0). Such an initial vector represents our ex-ante "best guess" about how the mobile units may be spatially distributed, before seeing the data; in this sense, as already noted by Shepp and Vardi in [25]⁶, it represents a sort of prior information, and for this reason we shall refer to α (or equivalently to a) as the "prior vector". Note however the difference between the notions of "prior vector" α (or equivalently a) and "prior probability distribution" P(u; α): the former is a vector of scalars that can serve as parameters for the latter, which is conversely a function of the data u parameterized in α and having the properties of a probability distribution (i.e., taking on positive values and summing up to one). This distinction will become more evident in Section V, where the Bayesian MAP and related estimators are introduced.
The role of the (non-rescaled) prior vector α is explicit in the SB estimator Eq. (21), where its elements represent prior probabilities (per tile). As for MLE-Poisson, the (rescaled) prior vector a serves as the initial point for the iterative procedure Eq. (11) by setting û^0 = a. Similarly, a is the natural starting point for the numerical minimization process in Eq. (10) to compute the MLE-Multinomial estimate. In summary, all three methods rely on a prior vector (rescaled or not) as the initial point for the computation.
A point of caution is needed in determining the value of the prior vector, particularly concerning its zero elements. It can be immediately recognized that setting the initial value α j = 0, or equivalently a j = 0, for a generic tile j will force that tile to maintain a zero value in the final solution. This holds true both for the EM iterative procedure based on Eq. (11), since zeros are stable points of the iteration due to its multiplicative structure, and for the direct computation of SB based on Eq. (21). In other words, with both these methods, namely SB and MLE-Poisson with EM, zeroing the jth element in the prior vector overrides any possible contribution carried by the measurement data for the corresponding tile j. This is equivalent to completely excluding a priori the corresponding tile j from the computation. But if we intend to do so, it is certainly more practical to drop the variable associated with such tile upfront, rather than instantiating a variable whose final value is fixed ex-ante to zero. By eliminating the zero tiles upfront, we can practically assume that all remaining elements in the prior vector take non-zero values.
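The stability of zeros under the multiplicative EM update can be checked numerically. The following sketch uses hypothetical toy values for P and c; the tile initialized at zero remains at zero after any number of iterations, exactly as discussed above.

```python
import numpy as np

# Hypothetical toy instance (not from the paper)
P = np.array([[0.7, 0.4, 0.0],
              [0.3, 0.6, 1.0]])   # P[i, j] = p_ij
c = np.array([60.0, 50.0])        # observed counts per cell, C = 110

u = np.array([55.0, 55.0, 0.0])   # prior with a zero in tile 3
for _ in range(50):
    # multiplicative EM update of Eq. (11): u_j <- u_j * sum_i c_i p_ij / (p_i^T u)
    u = u * (P.T @ (c / (P @ u)))
```

After the loop, u[2] is still exactly zero: the multiplicative structure can never revive a zeroed tile, while the update redistributes the total mass C among the remaining tiles.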
In practical settings, the prior vector can be instantiated based on land use data obtained from other sources, e.g., satellite imagery or land use surveys, as done in [27], [29] in the context of Voronoi tessellations. Such data can give indications about which areas are more or less likely to host humans, and therefore may increase the accuracy of the final estimate. However, translating external data into meaningful prior information still requires some modeling choices. For instance, whether an outdoor leisure area is more or less likely to attract visitors compared to a densely built area depends upon contextual factors, e.g., weather conditions or calendar day (working day vs. holiday). For this reason, such information should be used judiciously in order to avoid introducing biasing errors.

B. MLE-Multinomial and MLE-Poisson
We show that the two MLE procedures presented earlier in Sections III-A and III-B are equivalent, in the sense that they yield exactly the same set of solutions. Beforehand, we need a technical lemma summarized in the following Proposition.

Proposition 1. The log-likelihood Eq. (7) is concave, but not strictly concave, and the constraints are linear; hence the overall problem is concave but not strictly concave. As a consequence, all stationary points are equivalent global maxima.
Proof. See Appendix A.
In the following, the equivalence between MLE-Poisson and MLE-Multinomial is established. The solution space of the latter can be obtained by Proposition 1 as the set of stationary points of the Lagrangian function, i.e., of the objective function in Eq. (10) augmented by the scaled version of the equality constraint. The first-order optimality condition is therefore

∇_u [ c^T log(P u) − λ (1_J^T u − C) ] = 0,

which can be shown (see Appendix B) to yield, for each j,

Σ_{i=1}^{I} c_i p_{ij} / (p_i^T u) = 1,   (18)

where p_i^T denotes the ith row of P. Eq. (18) is identical to the convergence (fixed-point) stability condition of Eq. (11), i.e., û^{m+1} = û^m. More formally, we have the following result.

Proposition 2. The set of solutions of the MLE-Multinomial problem Eq. (10) coincides with the set of fixed points of the iterative procedure Eq. (11).
As an additional result, we can easily recognize that the multiplicative factor appearing in Eq. (11) equals the partial derivative of the objective function in Eq. (10), i.e.,

ψ_j^m ≜ Σ_{i=1}^{I} c_i p_{ij} / (p_i^T û^m) = ∂/∂u_j [ c^T log(P u) ] evaluated at u = û^m.

In other words, we can reinterpret the EM procedure in Eq. (11) (derived for the Poisson generative model) as a purely multiplicative method to solve iteratively the optimization Eq. (10) (derived for the Multinomial generative model) starting from an initial guess.
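The identity between the EM multiplicative factor and the partial derivative of the Multinomial objective can be verified numerically. All values below are hypothetical toy inputs; the sketch compares the analytical factor ψ against a central-difference numerical gradient of c^T log(P u).

```python
import numpy as np

# Hypothetical toy instance (not from the paper)
P = np.array([[0.7, 0.4, 0.0],
              [0.3, 0.6, 1.0]])
c = np.array([60.0, 50.0])
u = np.array([40.0, 30.0, 40.0])

loglik = lambda u: c @ np.log(P @ u)   # objective of Eq. (10)

# Multiplicative factor of the EM update: psi_j = sum_i c_i p_ij / (p_i^T u)
psi = P.T @ (c / (P @ u))

# Numerical partial derivatives of the objective, one coordinate at a time
eps = 1e-6
grad = np.array([(loglik(u + eps * e) - loglik(u - eps * e)) / (2 * eps)
                 for e in np.eye(3)])
```

The two vectors psi and grad agree to numerical precision, matching the identity stated above.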

C. Maximum Likelihood Bounded Subspace (MLBS)
Recall that the vector c represents a single realization of the random vector c̃ with unknown mean c̄. Since no other measurements are available, in the absence of external information the vector c represents a natural estimate of the mean value c̄. Replacing the unknown term c̄ with its estimate c in the generative relation Eq. (2) leads to the constraint P u = c. It is rather intuitive that, among all possible (non-negative) values of u, those respecting this constraint, if they exist, are the ones that best conform with the available measurement c and with the model P. This argument leads us to restrict the search for a "good" estimate û within the bounded subspace

MLBS ≜ { u ≥ 0 : P u = c }.   (20)

Note that the constraint P u = c absorbs the condition 1_J^T u = C on the total count. Since in our application J ≫ I, matrix P is rank deficient and the constraint P u = c admits multiple solutions in the variable u, although it is not guaranteed that all the components of the latter are non-negative, i.e., the MLBS may be empty. For a non-empty MLBS, while we have somewhat restricted the search space, we still need to provide a criterion for selecting a single solution within that space unambiguously. This aspect is elaborated later in Section V-C.

7 Such a purely multiplicative procedure, in the form u_j^{m+1} = u_j^m · ψ_j^m, is distinct from the standard gradient approach that, in general, involves an additive update of the form u_j^{m+1} = u_j^m + μ ψ_j^m.
Though this way of reasoning is heuristic, it turns out that the bounded subspace defined by Eq. (20) is intimately connected to the MLE-Multinomial and MLE-Poisson procedures. In fact, the condition P u = c can be rewritten as p_i^T u = c_i, ∀i, implying that the optimality condition Eq. (18) is always verified. In particular, the following result holds true.

Proposition 3. Every point of a non-empty MLBS is a solution of the MLE-Multinomial problem Eq. (10), i.e., a global maximizer of the likelihood.
Proposition 4. The SB estimator can be written in vector form as

û_SB = diag(α) P^T diag^{-1}(P α) c,

where diag(v) is a diagonal matrix containing the entries of v (and diag^{-1}(v) their reciprocals). An alternative form is

û_SB = α ⊙ (P^T (c ⊘ P α)),

where ⊙ denotes the element-wise (Hadamard) product between two vectors, and likewise ⊘ denotes the element-wise division between two vectors.
Proof. Notice that Σ_{k=1}^{J} p_{ik} α_k and Σ_{i=1}^{I} ξ_i p_{ij} are the ith and jth elements of the vectors P α and P^T ξ, respectively, where ξ ≜ diag^{-1}(P α) c. The thesis follows in a straightforward manner by rewriting the estimator in vector form and exploiting the associative property of the matrix product.

Proposition 4 highlights that, owing to its linearity, the SB estimator is simple to compute and, compared to MLE, does not involve an iterative procedure.
It should be noted that SB may fall outside the MLBS (a numerical example is given below in the toy scenario of Section IV-E) and therefore SB does not qualify in general as a ML solution.

Proposition 5. The SB estimate coincides with the result of the first iteration of the EM procedure Eq. (11) initialized with the prior vector, i.e., with û^0 = a.

Proof. It can be immediately verified by comparing Eq. (16) and Eq. (11), with û_j^m in the latter replacing α_j in the former.
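The equivalence between SB and the first EM iteration started from the (rescaled) prior is easy to check numerically. The toy values below are hypothetical; the rescaling by C cancels inside the update, so the two expressions coincide.

```python
import numpy as np

# Hypothetical toy instance (not from the paper)
P = np.array([[0.7, 0.4, 0.0],
              [0.3, 0.6, 1.0]])
c = np.array([60.0, 50.0])
C = c.sum()
alpha = np.full(3, 1 / 3)   # flat prior probabilities
a = C * alpha               # rescaled prior vector

# One EM step of Eq. (11) starting from u^0 = a ...
u_em1 = a * (P.T @ (c / (P @ a)))

# ... equals the SB estimate in Hadamard form (Proposition 4)
u_sb = alpha * (P.T @ (c / (P @ alpha)))
```

Both vectors are identical element-wise, as stated by the proposition.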

E. Graphical illustration for a toy scenario
To illustrate the above results graphically, we present numerical results referring to a simple toy scenario where C = 110 mobile phones are split across J = 3 tiles and connect to a small set of partially overlapping cells. In this toy instance there are only two degrees of freedom, since the three variables u_1, u_2 and u_3 are constrained by the total sum Σ_{j=1}^{3} u_j = C, or equivalently 1_J^T u = 1_I^T c. Therefore, we can map all admissible solutions u to a horizontal plane. In the 3D plot of Fig. 4(b) the horizontal dimensions represent u_1 and u_3, while the vertical dimension reports the value of the likelihood function in Eq. (7). The concavity of the likelihood function (proven in Proposition 1) is graphically evident. Notably, the loci of all global maxima of the likelihood function lie on a segment, which is the 1D MLBS within the whole 2D solution space. In the planar plot of Fig. 4(c) the MLBS is marked by blue dots. The flat initial guess (for which u_1 = u_2 = u_3 = 36.7), the corresponding SB estimate from Eq. (16) and the EM solution from Eq. (11) are indicated by markers.

V. A NOVEL ESTIMATION APPROACH

A. Rationale
In this section we derive a novel estimation approach that does not require an iterative procedure and can be solved in closed-form. The derivation starts from formulating the Maximum A Posteriori (MAP) estimator for the problem at hand, which is a typical approach to obtain point estimates combining data and prior information. To the best of our knowledge, this is a novel contribution not presented earlier in previous literature. We find however that the exact MAP estimator is impractical for the problem at hand, since the high computational complexity prevents its adoption in large problem instances. This motivates our effort to devise a novel alternative estimator.
Generally speaking, the MAP estimator combines information from the data, captured by the likelihood term embedding the measurement vector c, with information before the data, captured by the prior probability distribution P(u; α) embedding the prior vector α (or equivalently a = Cα), as discussed earlier in Section IV-A. In fact, the posterior probability distribution is proportional to the product of the two components, namely posterior ∝ likelihood × prior. The resulting solution is a trade-off, i.e., a compromise between the two sources of information. Notice that the SB estimator introduced in Section III-C also aims at a compromise between prior information and data, according to a Bayesian rationale. However, as discussed below, the MAP criterion goes a step further through the maximization of the posterior, which in fact leads to a more sophisticated estimator involving a computationally-intensive numerical optimization, while SB is a simple (linear) estimator available in closed-form.

B. The MAP estimator
We derive the MAP estimator based on the hierarchical generative model with Multinomial distribution shown in Fig. 3(a). The result is summarized in the following Proposition.

Proposition 6. The MAP estimator of u can be obtained as

û_MAP = argmax_{u ≥ 0, 1_J^T u = C} [ c^T log(P u) + u^T log α − Σ_{j=1}^{J} log u_j! ].   (23)

Proof. See Appendix D.
In Eq. (23) the term c T log P u carries the data information from the measurements c while the other two terms carry information from the prior probability distribution P(u; α). More specifically, the term u T log α carries information from the prior vector α (or equivalently a = Cα) which contains the parameters of the prior probability distribution (ref. IV-A), while the term J j=1 log u j ! captures the combinatorial diffusion effect that is intrinsic to the adoption of a Multinomial distribution, and induces a preference for solutions with higher entropy (it can be easily shown that this term is maximized when all elements of u are equal).
The discrete factorial term appearing in Eq. (23) could be replaced by its analytical continuation, i.e., by the Gamma function Γ(n + 1) = n!, leading to an equivalent continuous problem that, in principle, can be solved numerically via standard methods (e.g., gradient-based optimization of the negated objective). However, for very large problem instances, with I and J in the range of tens of thousands, numerical resolution with general-purpose solvers might still be impractical, motivating the derivation of a computationally simpler alternative.
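As a sketch of this continuous relaxation, the (negated) MAP objective with log u_j! replaced by log Γ(u_j + 1) can be written as follows. The instance values are hypothetical toy inputs; in practice one would hand this function to a constrained numerical optimizer together with the constraints u ≥ 0 and 1^T u = C.

```python
import numpy as np
from math import lgamma

def neg_map_objective(u, P, c, alpha):
    """Negated continuous relaxation of the MAP objective in Eq. (23):
    log u_j! is replaced by log Gamma(u_j + 1) via math.lgamma."""
    log_fact = sum(lgamma(uj + 1.0) for uj in u)
    return -(c @ np.log(P @ u) + u @ np.log(alpha) - log_fact)

# Hypothetical toy instance (not from the paper); note alpha > 0 is required,
# consistent with dropping zero-prior tiles upfront (Section IV-A)
P = np.array([[0.7, 0.4, 0.0],
              [0.3, 0.6, 1.0]])
c = np.array([60.0, 50.0])
alpha = np.array([0.5, 0.3, 0.2])
val = neg_map_objective(np.array([50.0, 40.0, 20.0]), P, c, alpha)
```

The returned value is a finite scalar suitable for minimization under the mass and non-negativity constraints.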
In Section V-C we will present a novel alternative estimator, labelled DF for "Data First", that exploits the particular structure of the problem at hand and yields a closed-form analytical solution. In order to provide additional insight, we will also elaborate on the relation and conceptual difference between the new DF estimator and the classical MAP estimator derived above. To this aim, it is convenient to derive an approximated version of the MAP estimator. The approximation relies on the multivariate normal approximation of the (prior) Multinomial distribution, yielding the following result (details can be found in Appendix E).

Proposition 7. The MAP estimator in Eq. (23) can be approximated by solving the following simpler optimization problem:

û ≈ argmax_{u ≥ 0, 1_J^T u = C} [ c^T log(P u) − ½ (u − a)^T A^{-1} (u − a) ],   (24)

with A ≜ diag(a).

Proof. See Appendix E.

C. A "Data First" approach
In Section IV we have established that all points within a non-empty MLBS attain the same maximum value of the likelihood function. In other words, the estimation problem is structurally non-identifiable, following the definition given in [26], in the sense that it is not possible to pick a unique solution based solely on the information contained in the data c, as all points within the MLBS conform to the data equally well. In order to select a particular point within the MLBS, or at least restrict to a subset of preferred solutions, we must necessarily resort to some additional assumption and/or to external information.
One possibility is to demand that, in addition to conforming to the measured data, the solution shall conform also to some particular structural property, characterizing the physical process we aim to measure, e.g., in the form of a smoothness criterion, minimum gradient between adjacent tiles, etc. A second approach is to resort to some kind of prior information to aid the solution selection process. Both approaches amount to adding an additional component to the objective function to be minimized, and therefore may be interpreted as different forms of regularization. For our specific application, dealing with human distribution in space, there are no obvious structural constraints tied to the physics of the underlying process, and it appears more natural to encode external information in the form of prior information (as done, e.g., in [27]).
In principle, a possible way to account for prior information (however determined) is through the MAP estimator derived in the previous section. As discussed, the MAP solution strikes a balance between the information from the data, captured by the likelihood term embedding the measurement vector c, and information before the data, captured by the prior distribution embedding the "prior vector" α (or equivalently a). As with other Bayesian estimators, as more data (more samples) become available the solution component driven by the data increases its relative importance and eventually dominates over the prior information. In the opposite direction, when the data are scarce, the prior information may dominate. In the extreme situation where only a single sample measurement is available, as is specifically the case in our application, adopting a MAP estimator involves a certain risk of diminishing the relative weight of the measurement information to the point that it almost vanishes. In other words, since with MAP (exact or approximated) we cannot control explicitly how much weight to put on the data vis-à-vis the prior, there is a certain risk of ending up with a solution that reflects mostly the prior information, only slightly perturbed by the measurements.
To avoid this undesirable effect, in the following we develop an alternative heuristic procedure for the problem at hand based on the "Data First" principle, where in-data information (likelihood) is given priority over other off-data knowledge (prior distribution and/or any other structural property).
The goal expressed above is achieved through an estimator built according to the following structure:

û = argmin_{u ∈ MLBS} f(u, a),   (25)

with f(u, a) denoting a distance function between the vectors u and a. With this structure, the ML property is imposed as a hard constraint. In fact, the condition

P u = c, u ≥ 0   (26)

forces the solution to lie within the MLBS, which is by definition the locus of all ML points, i.e., the points that best conform to the measured data c. Among these points, we then select the point that additionally best conforms with the off-data knowledge encoded in the prior vector a, through the minimization of some distance function f(u, a). Among many possible choices for the distance function, opting for the weighted ℓ2-norm defined in Proposition 7, i.e.,

f(u, a) = (u − a)^T A^{-1} (u − a)   (27)

with A ≜ diag(a), allows us to establish a direct connection between the newly proposed estimator and the approximate MAP form derived in Eq. (24), as elaborated below. Plugging Eq. (26) and Eq. (27) into the structure leads to the following novel estimator:

û = argmin_{P u = c, u ≥ 0} (u − a)^T A^{-1} (u − a).   (28)

The solution defined by Eq. (28) is not available in closed-form and may not even exist if the MLBS is empty. However, it provides the basis for a heuristic estimator û_DF, labeled DF for "Data First", based on a suitable relaxation of (28). Remarkably, this estimate can always be computed conveniently in closed-form, even in case of empty MLBS.

Proposition 8. A solution for a relaxation of (28) is given by

û_DF = ǔ ⊙ (P^T (c ⊘ P ǔ)),   (31)

where the intermediate point ǔ is

ǔ = max(0_J, a + A P^T (P A P^T)^{-1} (c − P a))   (32)

(maximum intended element-wise), with ⊙ and ⊘ defined as in Proposition 4.
Proof. See Appendix F.
The idea of DF stems from the fact that the structure of the solution to Eq. (28) can be relaxed to make it independent of the Lagrange multipliers, thanks to the sparsity of P (details in the proof). By doing so we obtain the intermediate point ǔ given in Eq. (32). Such a point, however, may violate the total mass constraint 1_J^T u = C due to the clipping of negative values needed to satisfy the non-negativity constraint on u. In a second step, we obtain û_DF by applying to ǔ the transformation in Eq. (11), already encountered in the EM procedure, which has the effect of redistributing the mass between the non-zero elements of the input vector so as to guarantee the fulfilment of the total mass constraint by the output vector, i.e., 1_J^T û_DF = C. Notice that, while MLE/EM is computed through multiple iterations, the DF estimator can be computed directly. A careful look at Eq. (32) reveals that the heaviest part of the computation depends only on the model matrix P and on the prior a.
In fact, setting F ≜ A P^T (P A P^T)^{-1} and g ≜ (F P − I_J) a, Eq. (32) can be rewritten as

ǔ = max(0_J, F c − g),

where the maximum function is intended element-wise. Notice that F can be efficiently computed as F = A^{1/2} (P A^{1/2})^†, where (·)^† denotes the Moore-Penrose pseudoinverse of the matrix argument. Moreover, both the matrix F and the vector g are independent of the measured data c. This turns out to be useful in scenarios where multiple estimates must be computed with different measurement vectors but with the same model matrix P, corresponding to real-world situations where the radio network coverage pattern, hence the cell footprints, may be assumed to remain unchanged between measurement times while mobile phones may have moved: in such cases F and g must be computed only once.
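The two-step DF computation described above can be sketched as follows, under hypothetical toy values for P, c and a flat prior. Note that F and g are computed from P and a only, before any measurement is seen; only the final two lines involve c.

```python
import numpy as np

# Hypothetical toy instance (not from the paper)
P = np.array([[0.7, 0.4, 0.0],
              [0.3, 0.6, 1.0]])
c = np.array([60.0, 50.0])
C = c.sum()
a = np.full(3, C / 3)            # flat prior, rescaled to total mass C
A = np.diag(a)

# Data-independent part: F and g depend only on P and a
F = A @ P.T @ np.linalg.inv(P @ A @ P.T)
g = (F @ P - np.eye(3)) @ a

# Step 1: clipped projection toward the affine subspace P u = c
u_check = np.maximum(0.0, F @ c - g)
# Step 2: one application of the Eq. (11) transformation to restore the total mass
u_df = u_check * (P.T @ (c / (P @ u_check)))
```

In code intended for large instances, the explicit inverse would be replaced by the pseudoinverse-based form mentioned above (e.g., np.linalg.pinv) or by a linear solve, for numerical robustness.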

VI. NUMERICAL RESULTS
In this section we present numerical results comparing the performance of the different estimators for a sample synthetic scenario. Testing on synthetic data has two important benefits: (i) it allows one to control explicitly the data generation parameters and therefore to assess the sensitivity of results to said parameters, and (ii) it allows one to quantify the absolute estimation error against a known "ground truth".
All the numerical results reported in this section were produced by a set of programs developed in the R language. The whole code is made available open-source in the form of an online notebook in order to enable independent replication of the results and reuse of all implemented functions in follow-up work by other researchers.

A. Scenario
We briefly describe here the data generation scenario, referring the interested reader to the open-source notebook referenced above for any further details. The reference area covers a total of 1,600 square kilometers and is divided into a regular square grid of 400 × 400 = 160,000 tiles of size 100 m × 100 m each.
The Ground Truth Population (GTP) of mobile phones was generated based on publicly available official census data for the city of Munich, Germany, and its immediate surroundings. The total population was scaled to one third to mimic the customer base of a single MNO with a 1/3 market share. The resulting GTP distribution is shown graphically in Fig. 7(a).
The radio network topology was generated synthetically with the mobloc tool [24] developed by M. Tennekes and already used in other studies [28], [31], [14]. The network generation model is completely parametric, and its modular implementation allows one to conduct empirical sensitivity analyses with respect to scenario parameters in future follow-up work.
The radio network considered in this study was designed in order to mimic the multi-layer nature of real-world mobile networks [16]. In fact, radio access networks are typically deployed in an incremental way, with a first layer of large cells (also called "umbrella cells" or macro cells) to ensure total coverage, and additional layer(s) of smaller cells deployed subsequently in order to increase capacity in selected areas and/or fill residual coverage gaps from the previous layer(s) 10 . Following the same rationale, our synthetic network consists of three distinct cell layers, namely "macro", "meso" and "micro" layers, with different sizes and densities of radio cells.
For each layer, antennas are placed according to a semi-regular hexagonal pattern with superimposed random jitter, so as to mimic the irregular placement of antenna towers in real-world deployments while still retaining a certain degree of uniformity in tower density. The latter condition ensures that the spatial distribution of antennas, hence the spatial pattern of radio coverage, remains independent from, and neutral to, the spatial distribution of mobile phones across the considered region. In other words, we are neither introducing an explicit matching nor an explicit mismatching between cell density and population density, an aspect that might advantage or disadvantage some estimator vis-à-vis the others. In the considered network deployment there are a total of 204 antenna locations (27, 156 and 21 respectively for the macro, meso and micro cell layers), placed at the positions shown in Fig. 5.
Every antenna carries a triplet of 120° sector cells oriented in different azimuth directions. The signal propagation model implemented in mobloc follows a simple geometric model with configurable parameters. For a generic tile j, the received signal strength from a generic cell i is computed at the tile center and from there the so-called "signal dominance" value s_ij is derived. If its value falls below a minimum threshold (set to 0.05 in our simulations), then tile j is considered to fall outside the coverage area of cell i. All cells for which the signal dominance value exceeds the minimum threshold "cover" tile j and compete to serve the mobile phones therein. If the tile is covered by multiple radio cells, then each mobile phone selects the serving cell independently from other phones and with probabilities that are proportional to the signal dominance values of the competing cells in that tile, as modelled by Eq. (4).
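The probabilistic phone-to-cell association described above can be sketched as follows. The signal dominance values below are hypothetical, and the 0.05 threshold mirrors the one used in our simulations; the association probabilities are simply the thresholded dominances normalized per tile.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical signal dominance values of 3 cells in one tile
s = np.array([0.40, 0.20, 0.03])
s = np.where(s >= 0.05, s, 0.0)   # cells below threshold do not cover the tile
p = s / s.sum()                   # phone-to-cell association probabilities

# Each of 1000 phones in the tile picks its serving cell independently
served = rng.choice(len(s), size=1000, p=p)
counts = np.bincount(served, minlength=3)
```

The cell whose dominance falls below the threshold never serves any phone, while the remaining cells split the phones roughly in proportion to their dominance values.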
In our scenario the antenna and cell parameters are set to layer-specific values for height, power, azimuth orientation, path loss exponent, etc. In Fig. 5 we show the contour of the cell coverage areas for three sample antennas, one for each layer. It should be clear from this figure that the coverage areas of each cell may overlap (i) with other cells mounted on the same antenna; (ii) with other cells mounted on other neighboring antennas from the same radio layer; and (iii) with other cells mounted on other neighboring antennas from different radio layers.

10 A typical mobile network nowadays consists of the superposition of multiple radio access technologies (RAT) operating at different frequencies, including 2G (GSM 900 and GSM 1800), 3G, 4G and prospectively also 5G. The deployment of each new RAT involves the addition of further layers to the overall radio coverage.

B. Estimators
In our simulation study we have compared the performance of six different density estimation approaches: three variants of Voronoi tessellation and three estimators designed for overlapping cells.
The following Voronoi tessellation methods were considered, resulting in the Voronoi diagrams shown in Fig. 6:
• Vor-T: Voronoi tessellation with one seed for each antenna tower. This simple method represents the standard approach in most previous literature.
• Vor-O: Voronoi tessellation with one seed for each radio cell, placed at a small fixed distance (10 m) from the respective tower location in the azimuth direction. This method, first proposed by [8], is the simplest way to take into account the azimuth orientation of directional cells.
• Vor-B: Voronoi tessellation with one seed for each radio cell, placed at the barycenter (mean point) of the signal dominance profile of the given cell.

Furthermore, we have implemented and compared the following three estimators for overlapping cells: SB, MLE/EM and DF. All these estimators are instantiated with a prior vector a and a model matrix P. For this study we have considered a non-informative uniform prior vector, with all equal elements. The model matrix was assumed to match exactly the emission matrix used in the data generation process.

C. Similarity measure
In order to evaluate the performance of a generic estimation method relative to competing candidates we must resort to some measure of similarity w(û, v) between the estimated density û and the ground truth density v.

Figure 6. Voronoi diagrams for the three tessellation options.
For this purpose we adopt the Kantorovich-Wasserstein Distance (KWD) with Euclidean ground distance. KWD (defined formally below) is known in the literature under several different names: Earth Mover's Distance, Mallows distance, etc. (see [32] and references therein). KWD is being increasingly considered in various scientific fields as an alternative to more traditional probabilistic measures like, e.g., the Kullback-Leibler divergence, the Hellinger divergence or the Pearson divergence. These measures belong to the class of f-divergences [33] and are built from the differences between the distribution values at individual tiles, with no consideration for their spatial positions. Such a tile-by-tile view does not capture the spatial structure of the distribution domain, and therefore misses completely the effects of "horizontal" spatial errors and of physical proximity (or lack thereof) between distribution masses. Instead, all these aspects are inherently taken into account by KWD.
The limitations of f-divergences are shared also by the so-called "Euclidean distance" between distributions, defined as ‖u − v‖_2 = (Σ_{j=1}^{J} (u_j − v_j)^2)^{1/2} (J being the total number of tiles), for which the input elements u, v are seen as points in a J-dimensional abstract space rather than as distributions of size J in a two-dimensional physical space. This leads to the curious paradox that adopting the Euclidean distance between distributions in the abstract space effectively disregards the Euclidean distance between tiles in the physical space. Instead, the latter is central to the KWD definition and motivates its adoption for the present study.
For two input distributions u and v defined over the same spatial grid of J tiles and with the same total mass 1_J^T u = 1_J^T v = C, KWD may be interpreted as the minimum cost of transporting the mass from configuration u to v (or vice-versa) when the cost of transporting a unit of mass between two generic tiles j and k equals the (physical) ground distance d_kj between them. KWD can be expressed as the solution w of the following Linear Programming (LP) optimization problem:

w = (1/C) min_{x ≥ 0} Σ_j Σ_k d_kj x_jk   subject to   Σ_k x_jk = u_j ∀j,   Σ_j x_jk = v_k ∀k.   (34)

It can be easily shown that KWD so defined is symmetric and fulfills the triangular inequality, and therefore fully qualifies as a "distance" in the formal mathematical sense.

11 The difference between f-divergences and KWD is nicely explained in terms of "vertical" versus "horizontal" similarity in https://jeremykun.com/2018/03/05/earthmover-distance.
The direct resolution of the LP problem in Eq. (34) is computationally expensive, preventing the computation of exact KWD values on very large grids. In a recent work, Gualandi et al. [32] showed that a close approximation of the exact KWD value, within a provable deterministic bound, can be obtained by solving a transportation problem over a regular lattice. In the proposed solution, the integer parameter L determines the lattice density and acts as a tuning knob to trade off computation resources (time and memory) against approximation error: they show that L = 3 is sufficient to achieve a KWD approximation that is guaranteed to be within 1.2% of the exact KWD value in all scenarios. An efficient open-source implementation of their method is publicly available along with code wrappers for R and Python. All KWD values presented in the present study were obtained with the R package SpatialKWD available on CRAN. Unless differently specified we retained the default parameter setting L = 3.
A physical interpretation of KWD w(û, v) in our context is the following. Imagine that, starting from the estimated density û, we move each unit of mass (or mobile phone) across tiles so as to arrive at the final GTP configuration v (transportation plan), and we do so in a way that minimizes the total travelled distance (optimal transportation plan). Some units will not travel, while others will travel over shorter or longer distances. On average, a single unit will travel a distance equal to the KWD value w. In other words, since KWD is normalized to the total mass (note the factor 1/C in Eq. (34)), its value can be interpreted as the "average distance" in the optimal transportation plan between the GTP and the estimated map (or vice-versa), and therefore as the "average spatial error" of the estimated density against the ground truth.
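To build intuition for the "average distance" reading of KWD, consider a one-dimensional strip of tiles, where the optimal transport cost with Euclidean ground distance has a simple closed form through the cumulative sums of the two distributions. This is only an illustrative sketch with made-up values (the actual study uses the SpatialKWD package on a 2D grid):

```python
import numpy as np

tile_size = 0.1   # km, matching the 100 m grid of this section
u = np.array([4.0, 0.0, 0.0, 0.0])   # estimated mass per tile (1-D strip)
v = np.array([0.0, 0.0, 0.0, 4.0])   # ground-truth mass, same total C = 4

C = u.sum()
# In 1-D the optimal transport cost equals the L1 distance between the CDFs
w = np.abs(np.cumsum(u - v)).sum() * tile_size / C
# Here every unit of mass travels 3 tiles, i.e., 300 m, so w = 0.3 km
```

The normalization by C makes w directly readable as the average distance travelled per unit of mass, mirroring the interpretation given above.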

D. Results
In Fig. 7 we report the density maps obtained with the six different estimation methods along with the ground truth population. The corresponding KWD values between the final estimates and GTP are reported in Fig. 8.
Recall that for SB, MLE/EM and DF the prior vector was set to the non-informative uniform vector with all equal elements (flat prior). In the considered scenario the KWD value between the flat prior and the GTP was 72.6 (average error of about 7.3 km). This represents a naive absolute reference for assessing the accuracy gain of the estimation process. Even the simplest Vor-T estimator brings the KWD down to 10.0 (average error of 1 km), a reduction by a factor of ×7 from the flat-prior reference. This confirms the (not entirely obvious) expectation that even the simplest density estimation approach is "better than nothing". If cell azimuth orientation data are available, in addition to tower locations, one can implement the slightly more sophisticated Vor-O variant. This brings the KWD score down to 8.7 (average error 870 m), i.e., a 15% improvement upon the simpler Vor-T option.
If further detailed knowledge about the signal dominance values is available, then one could use this information within a Voronoi-based approach by setting the seeds to the exact cell barycenter. This is, in a nutshell, the Vor-B variant. One would expect Vor-B to perform better than Vor-O, since it uses more detailed information about the cell coverage area (beyond merely tower locations and azimuth orientation) and in fact the KWD goes further down to around 6 (average error 600 m).
But if such detailed information is available, then one could also resort to the other three estimators that were designed specifically for overlapping cells. While the simplest SB estimator performs comparably to Vor-B in the considered scenario, a more substantial improvement can be obtained with DF and MLE: both methods bring the KWD below 3 (average error below 300 m) with a slight advantage of MLE/EM that yields an average error around 240 m, i.e., a factor of ×4 improvement over the most popular Vor-T method.
The price to pay for the higher accuracy of DF and MLE/EM vis-à-vis the other schemes is a slightly higher computational complexity. However, for both DF and MLE/EM the computational burden is absolutely sustainable. Our naive R implementation took less than 4 seconds to compute the closed-form DF estimate for the considered scenario on a low-end laptop, and further optimizations are possible. As for the iterative EM procedure, our tests show that the number of iterations required to reach a good solution is not very high, in the order of tens or a few hundreds (see Fig. 9) and, most importantly, each iteration is relatively straightforward to compute due to the simple structure of Eq. (11). Our naive, single-threaded implementation took less than 1 second to complete one iteration for all 160,000 tiles in the considered scenario. Since Eq. (11) must be evaluated once for every tile at each iteration, the overall computation time scales linearly with the number of tiles. With further code optimization, parallelization and additional hardware resources, the computational cost of MLE/EM is acceptable even for much larger instances.
Finally, we provide some insight into the convergence behaviour of the iterative MLE/EM procedure. Fig. 9 plots the KWD (against GTP) of the output of the MLE/EM procedure after n iterations, for different values of n (note the logarithmic scale), when the starting point is set to the non-informative flat prior. Recall from Proposition 5 that the outcome after the first iteration (n = 1) is equivalent to SB. One additional iteration (n = 2) is sufficient to halve the average error. The speed of improvement then slows down and essentially stabilizes after the first n = 100 iterations. Fig. 9 also reports the value obtained with the DF estimator from Eq. (31). Recall that the DF solution can be computed directly, with no iteration. However, the DF solution can also be used as an alternative initialization point for the iterative EM procedure. During the study we wondered whether such a different initialization strategy could eventually lead to a better final solution, and for this reason Fig. 9 also reports the KWD values obtained by the iterative EM procedure when the initial point is set to the DF solution. However, we found that the solution accuracy after n = 200 iterations remains essentially unchanged in the considered scenario.
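The convergence behaviour described above suggests a simple early-stopping rule: iterate until the per-iteration gain in log-likelihood becomes negligible relative to its magnitude. The sketch below is a hypothetical implementation of such a rule; the update formula is the same multiplicative EM step assumed earlier, and the function name and tolerance are illustrative, not taken from the paper.

```python
import numpy as np

def em_early_stop(P, c, u0, max_iter=500, rel_tol=1e-10):
    """Run the multiplicative EM update, stopping once the
    log-likelihood gain per iteration becomes negligible.

    Returns the estimate and the number of iterations performed.
    """
    u = np.asarray(u0, dtype=float).copy()
    ll_prev = -np.inf
    for n in range(1, max_iter + 1):
        lam = P @ u
        u = u * (P.T @ np.divide(c, lam, out=np.zeros_like(lam), where=lam > 0))
        lam = P @ u
        # multinomial log-likelihood (up to a constant)
        ll = float(np.sum(c[c > 0] * np.log(lam[c > 0])))
        if np.isfinite(ll_prev) and ll - ll_prev <= rel_tol * abs(ll):
            return u, n                  # converged: relative gain negligible
        ll_prev = ll
    return u, max_iter
```

With a flat initialization this reproduces the behaviour reported in Fig. 9: large early gains followed by a plateau, so the loop typically terminates well before max_iter.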

VII. CONCLUSIONS AND FUTURE WORK
The analytical and numerical results presented in this study indicate that, when the model matrix P is known accurately, the best estimates are obtained with MLE/EM and DF, and that the accuracy gain with respect to the more popular Voronoi methods is substantial: a factor of ×4 over Vor-T (based on tower location data) and ×3 over Vor-O (which also considers cell direction data) when quantified through KWD. A substantial improvement, by a factor of ×2.5, also holds against the SB estimator.
The gain in accuracy of the MLE/EM and DF estimators comes at a modest computational cost, thanks to the relatively simple structure of the iterative Eq. (11) for MLE/EM and the availability of a closed-form solution for DF. Our simple single-threaded implementation solves an instance of 160,000 tiles on a low-end commercial laptop in about 3 minutes for MLE/EM with n = 200 iterations, and in less than 4 seconds with DF. Early tests on very large instances of up to 2,000,000 tiles (not reported in this study) indicate that computational cost would not be a critical factor for a professional implementation.
We remark that the MLE/EM solution is analytically equivalent to the MLE solution developed independently in [12], but the resolution procedure derived in Eq. (11) is much simpler to implement and faster to compute than resorting to general-purpose solvers as done in [12].
The main limitation of all Voronoi approaches is the implicit assumption, inherent to the adoption of a tessellation approach, that mobile phones always connect to the nearest cell tower. This assumption is problematic in real-world scenarios due to several features of cellular radio planning: multi-layer radio coverage design, mixing of small/large cells, dynamic power control and load balancing mechanisms, etc. On the other hand, the simplest Voronoi variants Vor-T and Vor-O require very little information about the radio network, and therefore remain appealing for practical applications that can tolerate lower levels of spatial accuracy.
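The nearest-tower assumption underlying Vor-T can be made concrete with a minimal sketch: each tile is labelled with its closest tower, and each cell's phone count is spread uniformly over the tiles of the resulting Voronoi region. The function name and the uniform-spreading convention are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def vor_t_density(towers, tiles, counts):
    """Voronoi (Vor-T) estimate from tower locations only.

    towers: (I, 2) tower coordinates.
    tiles : (J, 2) tile centroids.
    counts: (I,) observed phone counts per cell/tower.
    """
    # squared distance from every tile to every tower -> nearest-tower label
    d2 = ((tiles[:, None, :] - towers[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)                 # (J,) tower index per tile
    u = np.zeros(len(tiles))
    for i, ci in enumerate(counts):
        members = nearest == i                  # tiles in tower i's Voronoi region
        if members.any():
            u[members] = ci / members.sum()     # spread count uniformly
    return u
```

The sketch makes the limitation explicit: the assignment depends only on geometric proximity, ignoring coverage overlaps, power control and load balancing.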
Conversely, if spatial accuracy is at a premium, efforts should be invested in acquiring more detailed information about radio network coverage in the form of detailed cell footprint profiles s_ij, from which more accurate predictions of the emission probabilities p_ij can be derived.
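One common way to turn footprint profiles s_ij into emission probabilities p_ij is to normalize the signal dominance over cells for each tile; this is an illustrative assumption, not necessarily the exact mapping used in the paper.

```python
import numpy as np

def emission_probabilities(S):
    """Derive an emission-probability matrix P from footprint profiles.

    S : (I, J) non-negative matrix; s_ij is the signal dominance of
        cell i on tile j.
    Returns P with p_ij = s_ij / sum_k s_kj, so each covered tile's
    column sums to 1 (uncovered tiles get an all-zero column).
    """
    col = S.sum(axis=0, keepdims=True)          # total dominance per tile
    return np.divide(S, col, out=np.zeros_like(S, dtype=float), where=col > 0)
```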
In between the "low-information low-accuracy" methods, namely Vor-T and Vor-O, and the "high-information high-accuracy" methods, namely MLE/EM and DF, our results indicate that little room is left for intermediate approaches like Vor-B or SB. The latter seem to inherit the weaknesses of both classes: they require very detailed information in order to be instantiated, comparable to that needed for MLE/EM and DF, but they fail to make good use of such information and end up with a level of accuracy that is only slightly better than the much simpler Vor-T and Vor-O methods.
The above results contribute to advancing the understanding of the relative advantages and costs of the recently proposed geolocation methods based on overlapping cells vis-à-vis the more traditional tessellation methods. At the end of the day, in all practical applications the choice between one approach and the other is a matter of balancing benefits and costs. On the cost side, based on the present study we can claim that the dominant factor is not related to computational resources (and time) but rather to the effort required to acquire detailed data about cell coverage profiles. On the benefit side, our simulation framework makes it possible to assess the maximum gain that can be expected for different network deployment scenarios under the ideal assumption that the emission probability matrix P is known precisely.
The natural next step in this research is to investigate the sensitivity of the final solution to uncertainty in the model parameters p_ij, or in other words, to assess the robustness of the estimators to model mismatch errors. Another distinct but closely related direction of research should look at assessing the impact of different scenario parameters on the relative performance of the various estimators. Finally, another direction for further exploration concerns the effect of informative priors, both in terms of the potential gain or loss of accuracy that may result from considering correct or incorrect informative priors, respectively. Answering these open research questions is part of our planned follow-up work.

Angelo Coluccia (M'13-SM'16) received the PhD degree in Information Engineering in 2011, and is currently an Associate Professor of Telecommunications at the Department of Engineering, University of Salento (Lecce, Italy). He has been a research fellow at Forschungszentrum Telekommunikation Wien (Vienna, Austria), and has held a visiting position at the Department of Electronics, Optronics, and Signals of the Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-Supaero, Toulouse, France). His research interests are in the area of multi-channel, multi-sensor, and multi-agent statistical signal processing for detection, estimation, localization, and learning problems. Relevant application fields are radar, wireless networks (including 5G and beyond), and emerging network contexts (including intelligent cyber-physical systems, smart devices, and social networks). He is a Senior Member of IEEE and a Member of the Technical Area Committee in Signal Processing for Multisensor Systems of EURASIP (European Association for Signal Processing).
Recalling Eq. (9), we bind the estimate û to equal the rescaled value of μ̂, i.e., we impose û = C μ̂. With this simple variable substitution, from Eq. (44) we obtain the following MAP estimator^17:

û_MAP = arg max_u { c^T log(P u) + log P(u; α) }

which can be rewritten to finally yield the thesis (note that the term α can be replaced by a = Cα thanks to the same invariance property recalled in the previous footnote).

E. Proof of Proposition 7
We start by approximating the (prior) Multinomial distribution with a multivariate normal,

P(u; α) = M(C, α) ≈ N(a, Σ_a),

with mean value u = a = Cα and covariance matrix Σ_a composed of the following elements:

(Σ_a)_{mn} = Cα_m(1 − α_m) for m = n,   (Σ_a)_{mn} = −C α_m α_n for m ≠ n.   (48)

In our application we are considering a very large number of tiles (J ≫ 1), which implies that the individual probabilities are very small in absolute terms (α_j ≪ 1). This justifies a further simplification of the covariance matrix: neglecting the product terms in the off-diagonal elements, C α_m α_n ≈ 0, and approximating the diagonal elements with Cα_m(1 − α_m) ≈ Cα_m = a_m, the covariance matrix reduces to the diagonal matrix A := diag([a_1 ··· a_J]) and its inverse to Σ_a^{−1} ≈ A^{−1} = diag([1/a_1 ··· 1/a_J]). In this way the approximate MAP estimator rewrites as

û_A.M. ≈ arg max_u { c^T log(P u) − (1/2)(u − a)^T A^{−1}(u − a) }   (49)

which finally yields the thesis.
^17 In the derivation we have used the following invariance property: a constant factor κ in the argument of the logarithm terms does not influence the solution of the maximization. Formally, arg max_{x,y} {log(x/κ) + y} = arg max_{x,y} {log x − log κ + y} = arg max_{x,y} {log x + y}.
The calculation of the gradient in the first line yields the stationarity condition. The vector of Lagrange multipliers λ corresponding to the equality constraint can then be obtained explicitly by exploiting the constraint itself.