UDDSketch: Accurate Tracking of Quantiles in Data Streams

We present UDDSketch (Uniform DDSketch), a novel sketch for fast and accurate tracking of quantiles in data streams. This sketch is heavily inspired by the recently introduced DDSketch, and is based on a novel bucket collapsing procedure that overcomes the intrinsic limits of the corresponding DDSketch procedures. Indeed, the DDSketch bucket collapsing procedure does not allow the derivation of formal guarantees on the accuracy of quantile estimation for data which does not follow a sub-exponential distribution. On the contrary, UDDSketch is designed so that accuracy guarantees can be given over the full range of quantiles and for arbitrary input distributions. Moreover, our algorithm fully exploits the budgeted memory adaptively in order to guarantee the best possible accuracy over the full range of quantiles. Extensive experimental results on synthetic datasets confirm the validity of our approach.


Introduction
A data stream σ can be thought of as a sequence of n items drawn from a universe U. In particular, the items need not be distinct, so that an item may appear multiple times in the stream. Data streams are ubiquitous and, depending on the specific context, items may be IP addresses, graph edges, points, geographical coordinates, numbers, etc.
Since the items in the input data stream arrive at a very high rate, and the stream may be of potentially infinite length (in which case n refers to the number of items seen so far), it is hard for an algorithm in charge of processing its items to compute an expensive function of a large piece of the input. Moreover, the algorithm is not allowed the luxury of more than one pass over the data. Finally, long-term archival of the stream is usually unfeasible. A detailed presentation of data streams and streaming algorithms, discussing the underlying motivations for research in this area, is available to the interested reader in [13].
In this paper we are concerned with the problem of accurately tracking quantiles in data streams. The difficulty is strictly related to the underlying nature of the input data stream, since it is a well-known fact that computing exact quantiles is impossible without storing all of the data [11]. Therefore, approximate solutions such as those provided by sketches are the only viable possibility.
Formally, given a multi-set S of size n over R, let R(x) be the rank of the element x, i.e., the number of elements in S smaller than or equal to x. Then, the lower (respectively upper) q-quantile item x_q ∈ S is the item x whose rank R(x) in the sorted multi-set S is ⌊1 + q(n − 1)⌋ (respectively ⌈1 + q(n − 1)⌉), for 0 ≤ q ≤ 1. By definition, x_0 and x_1 are respectively the minimum and maximum element of S, and x_0.5 is the median.
Regarding tracking accuracy, it can be defined in two different ways, as follows.

Definition 1 (Rank accuracy). For every item v and a given ε, return an estimated rank R̂(v) such that |R̂(v) − R(v)| ≤ εn.

Definition 2 (Relative accuracy). x̂_q is an α-accurate q-quantile if |x̂_q − x_q| ≤ αx_q for a given q-quantile item x_q ∈ S. A sketch data structure is an α-accurate (q_0, q_1)-sketch if it can output α-accurate q-quantiles for q_0 ≤ q ≤ q_1.
Even though for a long time research efforts have been focused on data structures providing rank accuracy, for data sets with heavy tails rank-error guarantees can return values with large relative errors. In particular, rank accuracy is not viable for tracking higher-order quantiles of heavy-tailed distributions.
DDSketch (Distributed Distribution Sketch) [10] is a recent sketch data structure providing relative accuracy for tracking quantiles in data streams whose underlying distribution is heavy-tailed. This sketch is conceptually very simple and can be implemented either using an unlimited number of buckets or fixing a desired maximum number of buckets to be used. In the former case, the space used may grow unbounded, whilst in the latter case, when the current number of buckets in the sketch exceeds the predefined maximum, a bucket collapsing procedure must be executed in order to guarantee that the number of buckets is always bounded from above.
Unfortunately, the authors of DDSketch do not provide formal guarantees on the estimation accuracy of a collapsed sketch when the input data is not drawn from a sub-exponential distribution.
In this paper we introduce and discuss a novel collapsing strategy for DDSketch. The main contributions of this paper are the following: (i) we formally model the relationship between the accuracy and the space occupied by the sketch for arbitrary input distributions; (ii) our algorithm fully exploits the budgeted memory adaptively in order to guarantee the best possible accuracy over the full range of quantiles.

Related Work
The problem of quantile computation has been extensively studied in the scientific literature; indeed, there are several publications about it, with algorithms characterized by very different approaches. The common goal is to provide the most accurate result possible with the minimum use of resources.
The first works on quantile sketches date back to the 1980s, when Munro and Paterson [12] demonstrated the first quantile sketching algorithm with formal guarantees. They proved the relationship between the amount of space needed and the number of passes required to select the k-th order statistic of a dataset of N elements.
Munro and Paterson designed a probabilistic algorithm to estimate the median by keeping s samples out of the N. If the data are presented in random order and s = Θ(N^(1/2)), then the algorithm has a high probability of storing samples containing the median. This algorithm can be adapted to find a specific quantile. The main result obtained is the proof that the amount of memory required by a deterministic p-pass selection algorithm is Ω(N^(1/p)). For a data stream, where only one pass is allowed, i.e., p = 1, the computation of the exact value of any quantile requires Ω(N) memory space. This result led subsequent work to focus on algorithms providing approximate quantile values.
A common technique used in practice for the selectivity estimation problem is to maintain frequency histograms, that is, buckets containing groups of values that approximate the true value and its frequency according to the statistics maintained by each bucket. Gibbons et al. [5] presented two fast and efficient procedures for maintaining two classes of histogram: equi-depth histograms and compressed histograms. In the equi-depth histogram, the elements are grouped into buckets so as to ensure the same number of elements for each of them (same height). In the compressed histogram, the n highest frequencies are stored in n separate buckets, and the rest of the elements are partitioned according to the equi-depth histogram. An equi-depth histogram approximates the exact histogram by relaxing the requirements on the number of elements in each bucket and on counting accuracy. Its distance from the real histogram can be measured by the following error metric. Consider an approximate equi-depth histogram with β buckets for N elements: the error metric μ_ed is the standard deviation of the bucket sizes from their average, normalized with respect to the average bucket size. The variant with compressed histograms is treated in a similar way, obviously with the necessary modifications to adapt the algorithm to that class of histograms.
These algorithms provide a summary of the data using histograms and can be used to estimate quantiles according to a different error metric; however, they need to perform multiple passes over the whole input dataset.
Manku et al. [8] designed an algorithm whose accuracy bound is independent of the input distribution and whose approximation error is uniformly distributed over all quantiles. The algorithm uses b buffers which store k elements each. Each buffer B is associated with a weight w_B which represents the occurrences of the input items fallen in the buffer. When the algorithm starts, all buffers are empty and they are populated with the elements from the input dataset; when all of the buffers are full, the collapsing procedure applies, modifying the weight of the collapsed buffer accordingly. The authors proved that the error ε committed on the estimation of the q-quantile is bounded and the space required to guarantee the error bound is O((1/ε) log²(εN)). The algorithm is efficient and offers opportunities for parallelism; however, it requires knowing in advance the size N of the dataset, which makes the algorithm not suitable for data stream processing.
Summarizing large datasets is important because of limited memory resources. The GK sketch algorithm by M. Greenwald and S. Khanna [6] addressed the problem of designing a space-efficient algorithm based on quantile summaries. A summary consists of a small number of items sampled from the input sequence. These items are then used to respond to any quantile request. The algorithm provides an ε-approximate estimate r′_q of the q-quantile. A summary is ε-approximate if the estimate of a q-quantile differs from the exact value r_q by |r′_q − r_q| ≤ εN. The algorithm requires memory space O((1/ε) log(εN)) and is independent of the input distribution. The GK sketch offers excellent results in terms of approximation and space used; however, it is not fully mergeable, which makes it impossible to use in a distributed setting; moreover, the memory required depends on the size of the input dataset, which is not known when processing data streams.
Summarizing distributions which have high skew using uniform quantiles is not informative, because having a uniformly spread-out summary of a stretched distribution does not describe the interesting tail region adequately. Motivated by this, Cormode et al. [2] designed an algorithm to efficiently estimate the high-biased quantiles, defined as 1 − φ, 1 − φ², ..., 1 − φ^k with 0 < φ < 1. The algorithm keeps information about particular items from the input, and also stores some additional tracking information. The intuition for this algorithm is as follows: suppose we have kept enough information so that the median of a dataset with N elements can be estimated with an absolute error of εN in rank. Now suppose that there are N more insertions of items above the median, so that this item is pushed up to being the first quartile. If the same absolute uncertainty of εN is maintained, then this corresponds to a relative error of size ε/2, considering that the number of items is doubled. Inspired by the GK algorithm, Cormode et al. provided an algorithm which is able to support greater accuracy for the high-biased quantiles.
The Moment Sketch algorithm of E. Gan et al. [4] is based on a data structure defined as a moment sketch. The sketch requires a minimal amount of space and is mergeable and computationally efficient. The authors use the method of moments to build a distribution function f(x) which can be used to describe the input dataset. Letting k be the highest power used for the moments, the moment sketch of a dataset D includes: the minimum value x_min; the maximum value x_max; the number of items n; the sample moments μ_i = (1/n) Σ_{x∈D} x^i for i ∈ {1, ..., k}; the logarithmic moments ν_i = (1/n) Σ_{x∈D} lg^i(x) for i ∈ {1, ..., k}. To estimate a quantile from a moment sketch, the method of moments is applied to build the PDF f(x) whose moments match those stored in the sketch and that maximizes the entropy; f(x) is then used to estimate the quantiles of the dataset. The moment sketch proves to be very fast, with an average error of less than 0.01 using about 200 bytes of space. However, there could be pathological situations with certain distributions for which it is not possible to compute finite moments. Moreover, the error is guaranteed in the average case, but not in the worst case, and errors caused by floating-point multiplications can occur.
The work done by T. Dunning and O. Ertl [3] introduced a new data structure known as the t-digest, formed by clustering real-valued samples. This structure differs from the previous ones in several ways: the data are grouped and summarized in the t-digest structure, but the ranges of data included in different clusters may overlap; the buckets are represented by a centroid value and a weight value that represents the number of samples contributing to the bucket, instead of the classic lower and upper limits; the samples are accumulated in such a way that only a few of them contribute to determining extreme quantiles, so that the relative error is bounded instead of keeping the absolute error constant. The accuracy of the q-quantile estimate is proportional to q(1 − q). In this algorithm the accuracy therefore depends on the quantile, and is better for quantiles close to 0 and 1.
Z. Karnin, K. Lang and E. Liberty [7] provide a solution to the problem of quantile computation on data streams. Given a set of elements x_1, ..., x_n, the rank of x, denoted by R(x), is the number of elements in the stream such that x_i ≤ x. A data structure is accurate for all quantiles if for each x, letting R̂(x) be the estimated rank of x, with probability 1 − δ it holds that |R̂(x) − R(x)| ≤ εn. The authors designed their algorithm as a reinterpretation of the work of [1] and [9] from a different point of view. The algorithm is based on the concept of a compactor, a data structure that can store k elements, all with the same weight w, and if necessary can compact its k elements into k/2 elements of weight 2w in the following way: the items are sorted, then the odd (respectively, even) items are selected, the non-selected even (respectively, odd) items are discarded, and the weight w of each selected item is doubled. Each compactor eliminates odd or even items with equal probability. The rank estimation error after this process depends at most on w. The output elements of a compactor are fed into another one, and so on; since each compactor holds half of the elements of the previous one in the sequence, there are at most H ≤ ⌈lg(n/k)⌉ compactors chained together, creating a hierarchy with variable capacity. Considering an algorithm run ending with H different compactors, the theorem proved by the authors states that there is a streaming algorithm that computes an ε-approximation of the rank of a single item with probability 1 − δ whose space complexity is O((1/ε) lg(1/δ)). Moreover, there is another streaming algorithm that produces mergeable summaries and computes an ε-approximation of the rank of a single item with probability 1 − δ whose space complexity is O((1/ε) lg(1/δ) + lg(εn)). An additional optimization guarantees the rank computation of a single element with probability 1 − δ and with a space complexity of only O((1/ε) lg lg(1/δ)) for the non-mergeable version and O((1/ε) lg² lg(1/δ)) for the mergeable version.
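The compactor step can be sketched in C as follows (an illustrative sketch under our reading of the procedure, with hypothetical helper names, not the authors' code):

```c
#include <stdlib.h>

/* One compaction: sort the k buffered items, keep either the odd- or
   the even-positioned half with equal probability, and double the
   common weight w of the survivors. Returns the new item count, k/2. */
static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

size_t compact(double *items, size_t k, unsigned long *w) {
    qsort(items, k, sizeof *items, cmp_double);
    size_t start = (size_t)(rand() % 2); /* odd or even positions, equiprobably */
    for (size_t i = 0; i < k / 2; i++)
        items[i] = items[2 * i + start];
    *w *= 2; /* each survivor now stands for two original items */
    return k / 2;
}
```

Survivors of one compactor are then inserted into the next compactor in the chain, which is what builds the hierarchy described above.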
The algorithm provides a randomized solution to the problem of computing quantiles on data streams, succeeding with probability 1 − δ and using a minimal amount of space, while ensuring the property of full mergeability. However, the algorithm provides estimates with a greater relative error for the high quantiles on heavy-tailed data.

DDSketch
A basic version of DDSketch, described in [10], can provide α-accurate q-quantiles for any 0 ≤ q ≤ 1. This version of the algorithm is both simple to understand and implement, and provides support for item insertion/deletion and for merging two compatible sketches (i.e., sketches characterized by the same α value). The main drawback of this algorithm is that the accuracy is obtained by trading off the space required: the number of buckets in a sketch can grow without bound. Owing to this limitation, the authors of DDSketch introduced in [10] an advanced version of DDSketch that can deliver α-accurate q-quantiles for q_0 ≤ q ≤ 1 with a bounded number of buckets. In this manuscript we only deal with this second, improved version of DDSketch.
DDSketch works by dividing R>0 into indexed buckets. Let B_i be the bucket with index i and m the maximum number of buckets. The algorithm works reactively, invoking a collapsing procedure whenever inserting a value causes the number of buckets to grow beyond m.
Denoting by γ the quantity (1 + α)/(1 − α), where α is the user-defined accuracy, the bucket B_i is a counter holding the occurrences of values x falling in the interval γ^(i−1) < x ≤ γ^i. Algorithm 1 shows the pseudo-code of the insertion procedure for an item x. We assume that the number b of buckets stored in the sketch at any time satisfies 0 ≤ b ≤ m, i.e., the number of buckets maintained is dynamic, depending on the sequence of insertion and deletion operations. Of course, the bucket indexes are dynamic as well. A bucket always holds a positive count: this is certainly true for insertion-only streams; however, DDSketch also allows deletions, in which case a bucket count may reach zero. When this happens, the bucket is discarded.
To insert a value x, the index i of the bucket in which x falls is computed as i = ⌈log_γ x⌉. If the bucket B_i is already present in the sketch, its counter is incremented by one. Otherwise, B_i is added to the sketch with a count initialized to one. Then, if the number of buckets exceeds m after inserting x, a bucket collapsing procedure is executed, collapsing the initial two buckets. Note that, in general, the first two buckets are not B_1 and B_2, since the indexes of these buckets depend on the actual insertions done. Therefore, in the pseudo-code we denote these buckets as B_y and B_z. In particular, it holds that y < z, but it is not necessarily true that z = y + 1, i.e., the indexes need not be consecutive. The buckets B_y and B_z are updated so that the count stored by B_y is added to B_z, and B_y is removed from the sketch. Alternatively, the collapsing procedure can be applied to the last two buckets.
The authors of DDSketch show that m buckets suffice to α-accurately answer a given q-quantile query if the condition of Eq. 1 holds. They then prove the following theorem, which bounds Eq. 1 for datasets drawn from sub-exponential distributions.
DDSketch, as described by its authors, only deals with R>0. Therefore, in order to deal with R, one must use two sketches, one of which is devoted to negative values.

UDDSketch
Theorem 1 holds for input data following a sub-exponential distribution, and requires that the distribution parameters σ and b are known. However, for arbitrary and/or unknown input distributions, it is not possible to give formal guarantees on the accuracy of DDSketch when the number of available buckets is limited. In such a case, an error beyond the desired level may also affect the range of quantiles of interest.
We devised a different collapsing strategy for DDSketch that overcomes the problem discussed above and allows giving guarantees on the accuracy of the sketch for all of the quantiles. As expected, when providing the user with an approximate result there is a trade-off between the accuracy α that can be achieved and the amount of space available. However, we can prove that, if the minimum and maximum of the values which can appear in input are known or can be estimated with a low probability of failure, then a strict relation exists between a desired level α of accuracy on a generic quantile query and the number of buckets needed to guarantee that accuracy.
The new collapsing strategy is named uniform collapse and, differently from the DDSketch collapsing, does not involve only two buckets but all of the buckets, which are collapsed two by two. More precisely, for each pair of indices (i, i + 1), where i is odd and B_i ≠ 0 or B_{i+1} ≠ 0, a new bucket with index j = ⌈i/2⌉ is created, whose count is the sum of the counts of B_i and B_{i+1} and which replaces the collapsed buckets. Algorithm 2 reports the pseudo-code of the uniform collapse procedure.

Algorithm 2 UniformCollapse(S)
The following lemma formally shows and justifies how uniform collapse modifies the sketch and its accuracy.
Lemma 2. The collapsing procedure applied to an α-accurate (0, 1)-quantile sketch produces an α′-accurate (0, 1)-quantile sketch on the same input data, with α′ = 2α/(1 + α²). Moreover, an item x falling in the bucket with index i of a collapsing sketch will fall in the bucket with index ⌈i/2⌉ of the collapsed sketch.
Proof. Let B_i and B_{i+1} be two adjacent buckets of the sketch to be collapsed. The collapsing procedure sums them up and replaces them with a new bucket, which we denote by B′_j. Let U_i, U_{i+1} and U′_j = U_i ∪ U_{i+1} denote the intervals of values which refer respectively to buckets B_i, B_{i+1} and B′_j. Let γ = (1 + α)/(1 − α) and γ′ = (1 + α′)/(1 − α′). We have that:

U′_j = (γ^(i−1), γ^i] ∪ (γ^i, γ^(i+1)] = (γ^(i−1), γ^(i+1)] = ((γ²)^((i−1)/2), (γ²)^((i+1)/2)],

from which we derive that γ′ = γ² and, as a consequence of the relation between α′ and γ′, and between α and γ:

α′ = (γ² − 1)/(γ² + 1) = 2α/(1 + α²).

Furthermore, if B_i and B′_j are the buckets in which a value x falls, respectively, before and after the collapse, then it holds that:

j = ⌈log_{γ′} x⌉ = ⌈(log_γ x)/2⌉ = ⌈⌈log_γ x⌉/2⌉ = ⌈i/2⌉,

which proves the relation between the bucket keys of the collapsing sketch and those of the collapsed sketch.
After collapsing the buckets, α′ is the new theoretical error bound for the sketch. Each time we perform a collapse, α increases, i.e., we lose accuracy. However, we do not expect to execute the collapsing procedure repeatedly up to the point where the loss in accuracy adversely impacts the data structure, precluding its use. The reason is that each time a collapse is done, the input interval covered by the m available buckets increases as well, so that a few collapses are enough to process input data streams with a very large range of values.
Following the collapsing algorithm, we can formulate the following theorem, which provides an upper bound on the accuracy of the results, i.e., on the error committed approximating the quantile computations when a limited number of buckets is available.

Theorem 3. Given an input whose data domain is an interval [x_min, x_max] ⊂ R>0 and a UDDSketch data structure using at most m buckets to process the input, the approximation error committed by UDDSketch using the uniform collapse procedure is bounded by α̂ = (γ̃² − 1)/(γ̃² + 1), where γ̃ = (x_max/x_min)^(1/m).

Proof. In order to provide an upper bound on the accuracy achieved by the UDDSketch data structure, we analyze the worst case, i.e., the situation in which the m buckets must uniformly cover the interval [x_min, x_max]. In such a case, the corresponding indexes are consecutive numbers denoted by i_1, i_2, ..., i_m. Let the covered interval be (γ̃^(i_1 − 1), γ̃^(i_m)]. Choosing i_1 = ⌈log_γ̃ x_min⌉, it holds that x_min falls into the first bucket B_{i_1}, that is:

γ̃^(i_1 − 1) < x_min ≤ γ̃^(i_1).    (6)

We now show that x_max falls into the last bucket B_{i_m}, i.e.:

γ̃^(i_m − 1) < x_max ≤ γ̃^(i_m).    (7)

It holds that γ̃^(i_m) = γ̃^(i_1) γ̃^m, since the bucket indexes are consecutive and the buckets uniformly cover the whole interval. As a consequence, taking into account the definition of γ̃, equation (7) is equivalent to:

γ̃^(i_1) (x_max/x_min) ≥ x_max.    (8)

Equation (8) holds, since γ̃^(i_1)/x_min ≥ 1, owing to equation (6). Now consider an initial α value and a corresponding initial γ such that an integer number of collapses which brings γ to γ̃ does not exist, but it holds that γ^(2^k) < γ̃ < γ^(2^(k+1)), for a k ∈ N. In this case, we may need k + 1 collapses to accommodate all of the input values and end up with a final value of γ greater than γ̃. However, even in this eventuality, the value of γ cannot grow beyond γ̃², and this is the reason why the upper bound on the accuracy is set to the α̂ value corresponding to γ̃².

In Theorem 3, we assume that the values x_min and x_max of the input data are known. This is not always true, but we can always estimate these values with a probability δ of failing our prediction. In that case, the bound shown by Theorem 3 holds with probability 1 − δ.
We now discuss how, given a user-desired level α of accuracy and a number of buckets sufficient to satisfy that accuracy according to Theorem 3, to choose the initial value of accuracy α_0 ≤ α with which to start our algorithm. By construction, the sequence of α_k values corresponding to the γ_k values changed upon a collapsing procedure follows the recurrence equation:

α_{k+1} = 2α_k / (1 + α_k²),    (9)

where k denotes the number of collapses performed. The solution to Eq. 9 is α_k = tanh(2^k arctanh(α_0)). Similarly, inverting this equation allows computing α_0 given a final accuracy α_k corresponding to k collapses:

α_0 = tanh(2^(−k) arctanh(α_k)).    (10)
We can use Eq. 10 to compute the initial value of the accuracy parameter for our algorithm by setting α_k to the value of the user-desired accuracy and k to the number of collapses that we are willing to accept. There is a trade-off to take into consideration: if we go backward too far, i.e., we set k too large, we could end up with too many collapses and a decrease in performance, but with a favourable input distribution we can obtain a better accuracy. On the contrary, if we compute the initial accuracy with a few collapses or no collapses at all (α_0 and the user α coincide), we improve the performance and may require less space, but we cannot do better in terms of accuracy than guaranteeing the desired α. We have seen that a good empirical value for k is 10.

Experimental Results
In this section, we present and discuss the experimental investigation carried out in order to compare UDDSketch against DDSketch.
Both the UDDSketch and DDSketch algorithms have been implemented in C and compiled using the GCC compiler v4.8.5 on Linux CentOS 7 with optimization level O3. The tests have been executed on a workstation equipped with 64 GB of RAM and two 2.0 GHz hexa-core Intel Xeon E5-2620 CPUs with 15 MB of level 3 cache. The source code is freely available for inspection and reproducibility of results 1.
The tests have been performed on 15 synthetic datasets, whose properties are summarized in Table 1. Each dataset consists of 10 million real values. Figure 1 shows the statistical distributions from which the datasets are drawn.
DDSketch is executed in each experiment using all of the possible collapsing strategies: collapses of buckets with higher IDs (DDSketch H), collapses of buckets with lower IDs (DDSketch L), and a third variant (DDSketch D) where the available buckets are equally partitioned between two sketches, one DDSketch H and one DDSketch L. In the latter case, each quantile is estimated through the most accurate sketch, i.e., the sketch whose estimation comes from a non-collapsed bucket. If both answers come from a collapsed bucket, the estimation from the sketch with fewer overall collapses is chosen.
The three variants of DDSketch and UDDSketch have been executed on each dataset in Table 1 varying the value of α and the maximum number of buckets available to the algorithm. The sets of values used are shown in Table 2.
In each test run, the performance, i.e., the number of values processed in a unit of time (updates per millisecond), and the accuracy, i.e., the relative errors committed on the estimation of the quantiles q_0, q_0.1, q_0.2, ..., q_1, are measured for all of the collapsing strategies under investigation. Figures 2 and 3 show the estimation errors committed by DDSketch L, DDSketch H and DDSketch D, compared with UDDSketch. Figure 2 refers to the betaL, chisquare and exponential datasets, whilst Figure 3 is relative to the normal, pareto and uniform datasets. The number of buckets is set to 1024 and α is set to 0.001. The results obtained when processing the other datasets in Table 1 are not reported here for saving space, since they exhibit similar behaviours.

1 https://github.com/cafaro/UDDSketch
The plots show the greater robustness of UDDSketch with respect to the distribution of the input values. Even when the number of buckets granted to the algorithm is not enough to reach the desired α (dotted line), UDDSketch nonetheless guarantees an overall better accuracy, regardless of the input distribution. Even when we are only interested in specific quantiles, DDSketch does not always succeed in guaranteeing a bounded relative error, as UDDSketch does, independently of the chosen collapsing strategy. Furthermore, it is not possible for DDSketch to choose the best collapsing strategy a priori when the input distribution is unknown. Particularly critical are the quantiles around the median, which DDSketch can rarely report with sufficient accuracy. Figures 4 and 5 show how the median and interquartile range of the relative errors on quantiles change when varying the number of buckets, with α fixed to 0.001. As in Figures 2 and 3, the plots in each column refer to the same collapsing strategy, and the plots in each row are relative to the same dataset. The datasets examined are the same as in the previous figures.
The observations made by inspecting Figures 2 and 3 are confirmed by Figures 4 and 5. UDDSketch returns quantile estimations that are overall more accurate than those of DDSketch, also in terms of lower medians and shorter interquartile ranges of the relative errors. Moreover, the experiments show that our solution puts the available buckets to better use: in fact, UDDSketch keeps improving the estimate when the space granted grows, whilst DDSketch stops when the required α is reached and makes no use of the extra buckets.
Finally, Figure 6 shows the performance of the different DDSketch collapsing strategies compared with UDDSketch, when varying the number of buckets, with reference to the betaL, exponential and uniform datasets. The other datasets lead to similar behaviours and are not reported.
The performance of the algorithms under test is in general comparable. UDDSketch is better when the number of buckets is low; DDSketch L and DDSketch H are more performant when the space grows, since they tend not to make use of the extra space. DDSketch D is always less performant due to the use of two sketches that must be updated at the same time.

Conclusions
We have introduced UDDSketch (Uniform DDSketch), a novel sketch for fast and accurate tracking of quantiles in data streams. Our sketch was heavily inspired by the recently introduced DDSketch, and is based on a novel bucket collapsing procedure that overcomes the intrinsic limits of the corresponding DDSketch procedures. UDDSketch has been designed so that accuracy guarantees can be given over the full range of quantiles and for arbitrary input distributions. Moreover, our algorithm fully exploits the budgeted memory adaptively in order to guarantee the best possible accuracy over the full range of quantiles. Extensive experimental results on synthetic datasets have confirmed the validity of our approach.