Adaptive Membership Functions and F-Transform

The definition of the fuzzy-transform (F-transform) has been limited mainly to 1-D signals and 2-D data due to the difficulty of defining membership functions, their centres, and supports on a domain with arbitrary dimensionality and topology. We propose a novel method for the adaptive selection of the optimal centres and supports of a class of radial membership functions based on minimizing the reconstruction error of the input signal as the F-transform and its inverse, or as a weighted linear combination of the membership functions. Replacing uniformly sampled centres of the membership functions with adaptive centres and fixed supports with adaptive supports allows us to preserve the input signal's local and global features and achieve a good approximation accuracy with fewer membership functions. We compare our method with uniform sampling and previous work. As a result, we improve the image reconstruction with respect to the compared methods while reducing the underlying computational cost and storage overhead. Finally, our approach applies to any class of continuous membership functions.


Simone Cammarasana and Giuseppe Patané

I. INTRODUCTION
Several transformations [e.g., the Fourier, Laplace, and fuzzy-transform (F-transform)] have been applied to signal approximation and analysis. In particular, the F-transform converts a real, continuous, and bounded function into a finite vector of components, which can be applied to reconstruct the input signal up to arbitrary precision. In fuzzy signal processing [1], [2], [3], [4] and analysis [5], [6], [7], the selection of the parameters (e.g., centres and supports) of the membership functions plays a fundamental role in the accuracy of the reconstruction. Previous work focuses on signal-driven sampling, on the preservation of the geometric features of the samples (e.g., the blue-noise property), and on the reconstructed signal (e.g., absence of artefacts).
Nonuniform fuzzy partitioning [8] and learning-based centre selection have been applied to signal approximation. In [9], the number of basic functions is estimated from a family of parametric fuzzy membership functions, and an optimization criterion is applied to obtain the F-transform with the best approximation properties. Assuming a specific number and type of basic functions [10], the minimization of the error functional with respect to the position of the nodes of the partitions is performed through interior point methods and sequential quadratic programming. In [11], a multistep reconstruction through the F-transform with an arbitrary mask (e.g., uniform subsampling/arbitrary missing pixels) on regular grids has been proposed. Adaptive sparse sampling [12] computes meshless centres for image reconstruction, accounting for the space-frequency information content of image patches. The reconstruction of the image from a uniform subsampling of the input grid is achieved through a cubic convolution interpolating kernel [13] and a convolutional neural network [14]. Gaussian mixture models [15] and kernel-based sampling [16] approximate an input signal as a mixture of probability distributions or a linear combination of radial basis functions. Clustering [17] is applied to group points that satisfy a common property (e.g., planarity, closeness), and each membership function is centred at a representative point of each cluster. Finally, we mention physics-based [18], stochastic [19], optimal transport [20], and Voronoi [21] sampling.
As the main drawback of previous work, the application of the F-transform has been limited mainly to 1-D signals and 2-D data due to the difficulty of defining membership functions, their centres and supports, on a domain with any dimension and topology. Commonly, the centres of the membership functions are uniformly sampled on the input domain, thus disregarding the local/global features of the input signal, the selected class of membership functions, the underlying computational cost, and the storage overhead. Uniform sampling is inefficient, as the number of centres of the membership functions rapidly increases with the sampling of the input domain, and their location needs to be adapted to the input signal to be reconstructed. Similarly, the supports of the membership functions are commonly set to a fixed, constant value.
To improve the approximation and reconstruction of an input signal f on a 2-D arbitrary domain, we propose a novel sampling method that accounts for the input signal in the adaptive selection of the optimal centres and supports of the radial membership functions. We compute the optimal centres and supports through the minimization of an energy functional defined as the norm between the input signal and the signal reconstructed as (i) the composition of the F-transform and its inverse, i.e., F⁻¹(Ff), or (ii) the weighted linear combination of the membership functions, named, respectively, (i) adaptive fuzzy and (ii) adaptive weighted sampling. The main criteria for the selection of the centres and supports of the membership functions are: adaptation to the input domain and signal to guarantee a good reconstruction of local and global properties; feature preservation without either over-smoothing or artefacts in the sampling or the reconstructed signal; and spectral properties (e.g., the blue-noise property) that allow us to achieve a more accurate signal reconstruction.
We apply the adaptive selection of the centres and supports of the 2-D Gaussian membership functions to 2-D images and compare the sampling and reconstruction results with uniform sampling, where the centres of the membership functions are the nodes of a 2-D regular grid and the supports' size is constant (see Section II). For the solution of the minimization problems (i) and (ii), we select the principal axis (PRAXIS) [22] and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) [23] optimizers, respectively, as they accurately approximate the optimal solution.
Our tests show good quantitative results, evaluated with the mean square error (mse), normalized cross-correlation (NCC), and normalized root mean square error (NRMSE) metrics, and good qualitative results, such as feature preservation. Through adaptive sampling, the image reconstruction has better quantitative results than the uniform sampling; in the experimental examples, the NRMSE value is halved while the NCC value also improves. Furthermore, our method improves the mse value from 0.017 to 0.0017 when increasing the number of membership functions from 250 to more than 2K on a 256 × 256 image.
We analyze the accuracy of the reconstruction, accounting for the properties of the input image in terms of the distribution of the gray-scale values. We also discuss the convergence of the optimization method for the minimization of the functional for the adaptive selection of the centres and supports of the membership functions, in terms of the number of functional evaluations, and the properties of the Gaussian membership functions, in terms of global and local supports (see Section III). Radial membership functions with adaptive centres and supports improve the approximation of the signal with respect to uniform sampling; the F-transform guarantees a higher accuracy than a least-squares approximation with a weighted linear combination of membership functions. In our discussion, we focus on Gaussian membership functions and 2-D images; however, our approach applies to any class of continuous membership functions (see Section IV).

II. ADAPTIVE RADIAL MEMBERSHIP FUNCTIONS
In the 1-D and 2-D F-transforms (see Section II-A), the membership functions are typically triangularly shaped or radial, with fixed centres and supports. A common choice is to select the centres as a uniform subsampling of the input domain (e.g., an interval or a 2-D square) or by clustering the input data. Uniform sampling is highly inefficient, as the number of centres of the membership functions rapidly increases with the sampling of the input domain, and their location is not adapted to the input signal to be reconstructed. Similarly, the width of the support of the membership functions is commonly set to a fixed, constant value. We propose a class of radial membership functions (see Section II-B) and a novel method for the adaptive computation of their optimal centres and supports, based on the reconstruction error of the input signal as (i) the F-transform or (ii) a weighted linear combination of the membership functions, named, respectively, (i) adaptive fuzzy and (ii) adaptive weighted sampling (see Section II-C).

A. F-Transform
Let us consider the space L²(Ω) of square-integrable functions defined on a compact and connected domain Ω of Rⁿ, endowed with the L²(Ω) scalar product ⟨f, g⟩₂ := ∫_Ω f(q)g(q)dq and the corresponding norm ‖f‖₂² := ∫_Ω |f(q)|²dq. Finally, we recall that the support of a function f : Ω → R is the closed set supp(f) := cl{q ∈ Ω : f(q) ≠ 0}. Given the space C⁰(Ω) of continuous functions defined on Ω, a family of functions {A_i}_{i=1}^n in C⁰(Ω) is a fuzzy partition of Ω with centres {p_i}_{i=1}^n if the following properties hold for each i: A_i(q) > 0, for all q ∈ Ω, and the sum-one property Σ_{i=1}^n A_i(q) = 1. Given a fuzzy partition, the components of the F-transform [24], [25], [26], [27] of f are defined as F_i := ∫_Ω f(q)A_i(q)dq / ∫_Ω A_i(q)dq, i = 1, ..., n. (1) Since the function f is known at a set of points (q_j)_{j=1}^m in Ω, the definition in (1) is replaced by the discrete F-transform F_i := Σ_{j=1}^m f(q_j)A_i(q_j) / Σ_{j=1}^m A_i(q_j), i = 1, ..., n. We refer to the F-transform as Ff. Finally, the discrete F-transform is applied to recover an approximation f_{F,n} of the function f underlying the set of values (f(q_j))_{j=1}^m through the inverse F-transform [24]. We refer to the inverse F-transform as F⁻¹.
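As a concrete illustration of the discrete F-transform and its inverse, the following NumPy sketch applies the definitions above to a 1-D toy signal with normalized Gaussian membership functions; the signal, the number of centres, and the widths are illustrative choices, not the paper's experimental setup.

```python
import numpy as np

def fuzzy_partition(q, centres, sigma):
    """Normalized Gaussian membership functions A_i(q_j): rows i, columns j."""
    # Unnormalized radial kernels phi_i(q_j) = exp(-|q_j - p_i|^2 / sigma_i)
    d2 = (q[None, :] - centres[:, None]) ** 2
    A = np.exp(-d2 / sigma[:, None])
    return A / A.sum(axis=0, keepdims=True)   # enforce the sum-one property over i

def f_transform(f, A):
    """Discrete F-transform: weighted average of f in each fuzzy set."""
    return (A * f[None, :]).sum(axis=1) / A.sum(axis=1)

def inverse_f_transform(F, A):
    """Inverse F-transform: blend the components with the memberships."""
    return (A * F[:, None]).sum(axis=0)

q = np.linspace(0.0, 1.0, 200)            # sample points q_j
f = np.sin(2 * np.pi * q)                 # toy input signal
centres = np.linspace(0.0, 1.0, 16)       # uniformly sampled centres p_i
sigma = np.full(16, 0.005)                # fixed widths (uniform sampling)
A = fuzzy_partition(q, centres, sigma)
f_rec = inverse_f_transform(f_transform(f, A), A)
err = np.sqrt(np.mean((f - f_rec) ** 2))  # reconstruction error (RMS)
```

With uniformly sampled centres the reconstruction is already close for a smooth signal; the adaptive methods of Section II-C replace `centres` and `sigma` with optimized values.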

B. Radial Membership Functions and F-Transform
Let us consider a kernel K : Ω × Ω → [0, 1] that is (i) continuous, (ii) symmetric (i.e., K(p, q) = K(q, p)), and (iii) nonnegative (i.e., K(p, q) ≥ 0), for all p, q ∈ Ω. According to the definition of a fuzzy partition, any kernel that satisfies the properties above induces a membership function A_{p_i} : Ω → R, A_{p_i}(q) := K̄(p_i, q), centred at p_i, by applying the normalization K̄(p_i, q) := K(p_i, q) / Σ_j K(p_j, q), q ∈ Ω.
Analogously to [28], we consider the family of radial kernels K(p, q) := ϕ(‖p − q‖₂), with ϕ : R⁺ → R the generating function, and the corresponding radial membership function centred at p_i, defined as ϕ_i(q) := ϕ(‖q − p_i‖₂), i = 1, ..., n. Assuming that the generating function is positive, the positivity of the membership functions is always satisfied. The sum-one property is satisfied by normalizing the membership functions as ϕ̄_i(q) := ϕ_i(q) / Σ_{j=1}^n ϕ_j(q). Depending on the properties of ϕ, the membership functions are globally or compactly supported.
Selecting compactly supported membership functions generally provides a lower memory storage and computational cost. Wider supports reduce the accuracy of the approximation due to the overlapping of different classes in the fuzzy partition; also, the preservation of local properties is less accurate. Recalling that Ω is a compact domain and noting that {supp(ϕ_i)}_{i∈I} is a covering of Ω with closed sets, we conclude that it is always possible to find a finite set {p_i}_{i∈I} of points whose associated supports cover Ω. F-transform induced by radial membership functions: The direct and inverse F-transforms are linear operators that can be computed through the matrix representation of their continuous definition. In particular, the F-transform is discretised by the normalized Gram matrix Φ̄ ∈ R^{n×m}, and its application to a function f defined on a set of points (q_j)_{j=1}^m is Ff = Φ̄f.
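The matrix representation can be sketched as follows: the normalized Gram matrix Φ̄ = D⁻¹Φ and the pseudoinverse-based inverse F-transform follow the definitions in the text, while the 1-D domain, the toy signal, and the parameter values are illustrative.

```python
import numpy as np

m, n = 120, 12
q = np.linspace(0.0, 1.0, m)              # sample points q_j
p = np.linspace(0.0, 1.0, n)              # centres p_i
sigma = np.full(n, 0.02)                  # widths

# Gram matrix of the radial membership functions, Phi(i, j) = phi_i(q_j)
Phi = np.exp(-((p[:, None] - q[None, :]) ** 2) / sigma[:, None])
D = np.diag(Phi.sum(axis=1))              # diagonal matrix of row sums
Phi_bar = np.linalg.solve(D, Phi)         # normalized Gram matrix D^{-1} Phi

f = np.cos(3.0 * q)                       # toy input signal
F = Phi_bar @ f                           # discrete F-transform
f_rec = np.linalg.pinv(Phi_bar) @ F       # inverse via Moore-Penrose pseudoinverse
err = np.sqrt(np.mean((f - f_rec) ** 2))  # reconstruction error (RMS)
```

Each row of Φ̄ sums to one by construction, so F is a vector of weighted averages of f, and the pseudoinverse projects f back onto the span of the membership functions.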

C. Adaptive Membership Functions
To evaluate the input signal at any point and guarantee a smooth approximation, we select continuous membership functions. Even though the proposed approach can be applied to any class of continuous membership functions, for our discussion we select the Gaussian membership functions ϕ_i(q) := exp(−‖q − p_i‖₂² / σ_i), (2) where p_i and σ_i are their centres and widths, respectively. Given an input signal f : Ω → R and a set of Gaussian membership functions {ϕ_i}_{i∈I} in (2), we search for their optimal centres P := (p_i)_{i∈I} and widths σ := (σ_i)_{i∈I} by minimizing the L²(Ω) norm between f and its approximation f̃. The approximating function is computed either (i) through the F-transform and its inverse, i.e., adaptive fuzzy sampling (see Section II-C1), or (ii) as a weighted linear combination of the membership functions, i.e., adaptive weighted sampling (see Section II-C2).

1) Adaptive Fuzzy Sampling:
We reconstruct the approximation f̃ of the input signal through the F-transform F and its inverse F⁻¹ as F⁻¹(Ff) and minimize the corresponding least-squares error ‖f − F⁻¹(Ff)‖₂. Given the normalized Gram matrix Φ̄ and its pseudoinverse Φ̄†, the approximating function is f̃ = Φ̄†Φ̄f, which depends on P and σ. Then, the set of optimal centres P and widths σ is the solution to min_{(P,σ)} ‖f − Φ̄†Φ̄f‖₂. (3) This method involves n(d + 1) variables, i.e., nd variables for the coordinates of the n centres and n variables for the widths.
Centres' and supports' optimization: The approximating function of our optimization problem is f̃ = Φ̄†Φ̄f in (3). This implies the solution of the linear system Φ̄f̃ = Φ̄f, which corresponds to Φ̄ᵀΦ̄f̃ = Φ̄ᵀΦ̄f in the least-squares sense. The conditioning of the coefficient matrix is improved through the regularization (Φ̄ᵀΦ̄ + λI)f̃ = Φ̄ᵀΦ̄f. Finally, we define the energy functional of the minimization problem as ‖f − (Φ̄ᵀΦ̄ + λI)\(Φ̄ᵀΦ̄f)‖₂, where (Φ̄ᵀΦ̄ + λI) is the coefficient matrix and Φ̄ᵀΦ̄f is the right-hand side of the linear system. The value of λ is experimentally selected as 10⁻⁶, as larger values improve the conditioning of the coefficient matrix at the cost of a lower solution accuracy.
The main operations for the computation of the energy functional are the sparse matrix-matrix multiplication, which takes O(k²m) operations, where k is the number of nonzero elements in each row of the sparse matrix, and the solution of the linear system. According to [34], we select as solver the iterative biconjugate gradient stabilized (BICGSTAB) method, based on Lanczos bi-orthogonalisation, with the block-Jacobi preconditioner. Each BICGSTAB iteration comprises two matrix-vector multiplications, two scalar products on vectors, one vector norm, and two vector sums, operations that are linear in the number of elements of the vectors. The computational cost is O(kmt), with t the number of iterations, which depends on the required accuracy, k the number of nonzero elements in each row of the sparse matrix, and m the number of input points. BICGSTAB guarantees the stability of the solution and scalability properties.
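A minimal sketch of the regularized linear system and its iterative solution, assuming SciPy's `bicgstab` without a preconditioner in place of the paper's block-Jacobi-preconditioned BICGSTAB; the matrix sizes, truncation threshold, and toy signal are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import bicgstab

# Sparse normalized Gram matrix Phi_bar (n x m) with truncated Gaussian rows
m, n = 200, 20
q = np.linspace(0.0, 1.0, m)
p = np.linspace(0.0, 1.0, n)
sigma = 0.01
K = np.exp(-((p[:, None] - q[None, :]) ** 2) / sigma)
K[K < 1e-4] = 0.0                               # truncate the supports -> sparsity
Phi = csr_matrix(K)
D_inv = 1.0 / np.asarray(Phi.sum(axis=1)).ravel()
Phi_bar = csr_matrix(Phi.multiply(D_inv[:, None]))   # D^{-1} Phi

f = np.exp(-((q - 0.5) ** 2) / 0.05)            # toy input signal
lam = 1e-6                                       # regularization weight (paper's value)
A = (Phi_bar.T @ Phi_bar + lam * identity(m)).tocsr()   # coefficient matrix
b = Phi_bar.T @ (Phi_bar @ f)                    # right-hand side
f_tilde, info = bicgstab(A, b)                   # iterative sparse solve
energy = np.linalg.norm(f - f_tilde)             # value of the energy functional
```

The sparse products and the Krylov solve are exactly the per-evaluation operations whose costs O(k²m) and O(kmt) are quoted above.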
Our adaptive fuzzy sampling (3) is a minimization problem whose functional is nonconvex and nonlinear, and whose derivatives are not available in analytic form. These properties affect the selection of the optimizer and induce a large number of iterations of the optimizer and, consequently, of evaluations of the functional. In this context, a global optimizer (e.g., DIRECT-L) computes the optimal solution with a high computational cost, O(2ⁿ) in the worst case, with n variables. To solve our problem, we apply PRAXIS, a gradient-free local optimizer for multivariate functions. PRAXIS is a modification of Powell's direction-set method [35]; given n variables, the set of n search directions is repeatedly updated until a set of directions conjugate with respect to a quadratic form is reached after n iterations. To ensure the correct minimum is found, the matrix of the search directions is replaced by its principal axes so that the direction set spans the entire parameter space. PRAXIS computes an accurate solution with a reduced O(n²) computational cost. We refer the reader to [36] for further comparison between global and local optimizers.
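To make the optimization loop concrete, the sketch below minimizes the energy of (3) on a 1-D toy problem with a gradient-free direction-set method. Since the paper uses NLopt's PRAXIS, SciPy's Powell optimizer is used here only as a readily available stand-in from the same family of methods; the step signal and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-D instance of the adaptive fuzzy objective (3): optimize centres and
# widths of n Gaussians by minimizing ||f - Phi_bar^+ (Phi_bar f)||_2.
m, n = 100, 6
q = np.linspace(0.0, 1.0, m)
f = (q > 0.5).astype(float)                 # a step: a local feature to adapt to

def energy(x):
    p, s = x[:n], np.abs(x[n:]) + 1e-4      # centres and (positive) widths
    Phi = np.exp(-((p[:, None] - q[None, :]) ** 2) / s[:, None])
    Phi_bar = Phi / (Phi.sum(axis=1, keepdims=True) + 1e-12)
    F = Phi_bar @ f                          # direct F-transform
    f_rec = np.linalg.pinv(Phi_bar) @ F      # inverse F-transform
    return np.linalg.norm(f - f_rec)

# Initialize with uniform sampling, as in the paper's experiments
x0 = np.concatenate([np.linspace(0.1, 0.9, n), np.full(n, 0.02)])
res = minimize(energy, x0, method="Powell")  # gradient-free direction-set method
```

By construction the direction-set method never returns a worse point than the uniform initialization, so `res.fun <= energy(x0)`: the adaptive centres and widths improve the reconstruction.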
2) Adaptive Weighted Sampling: First, the reconstructed signal is defined as a weighted linear combination of the membership functions, f̃(q) := Σ_{j=1}^n α_j ϕ_j(q). The set of centres P and widths σ is the solution to min_{(P,σ)} ‖f − f̃‖₂. (4) After solving the minimization problem, we define the Gram matrix Φ associated with the membership functions with optimal centres and supports. We compute the weights α := (α_j)_{j=1}^n that minimize the least-squares error min_α ‖Φα − f‖₂; (5) this optimization problem corresponds to the solution of the normal equation ΦᵀΦα = Φᵀf, (6) where Φ is the m × n Gram matrix associated with the membership functions {ϕ_j}_{j=1}^n and the right-hand side is the n-vector Φᵀf built from the input signal f.
Centres' and supports' optimization: At the minima of the energy functionals in (4) and (5), the partial derivatives with respect to the variables vanish. The optimization problem in (4) involves ñ = n(d + 1) variables, i.e., nd variables for the d coordinates of the n centres and n variables for the widths. The computation of the weights involves ñ = n variables, one for each membership function. The minima of the discrete energy functional are computed through the iterative optimization method L-BFGS, which finds the roots of the derivative of the energy functional. We briefly recall that L-BFGS is an optimization algorithm in the family of quasi-Newton methods that approximates the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm using a limited amount of computer memory.
Analogously to BFGS, the L-BFGS solver estimates the inverse Hessian matrix for the minimum search in the variable space; however, L-BFGS represents this approximation implicitly through a few vectors, thus involving a limited memory requirement. At each iteration, a small history of the past updates of the variables and of the gradient of the energy functional in (4) and (5) is used to identify the direction of steepest descent and to implicitly perform the operations requiring vector products with the inverse Hessian matrix. For the L-BFGS method, the memory storage is O(ñv) and the computational cost is O(ñv) at each iteration, where v is the number of steps stored in memory. The computational cost of the linear combination of Gaussian membership functions is O(ñ + m).
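For the weights-only subproblem, the quadratic energy and its analytic gradient can be handed to L-BFGS directly. The following sketch, with illustrative sizes and signal, cross-checks the L-BFGS solution against the normal equation (6).

```python
import numpy as np
from scipy.optimize import minimize

# Weights of the adaptive weighted sampling for fixed centres/widths:
# minimize E(alpha) = ||Phi alpha - f||_2^2 with L-BFGS and its analytic gradient.
m, n = 150, 10
q = np.linspace(0.0, 1.0, m)
p = np.linspace(0.0, 1.0, n)
sigma = 0.02
Phi = np.exp(-((q[:, None] - p[None, :]) ** 2) / sigma)   # m x n Gram matrix
f = np.sin(2 * np.pi * q)                                 # toy input signal

def E(alpha):
    r = Phi @ alpha - f            # residual of the weighted linear combination
    return r @ r

def dE(alpha):
    return 2.0 * (Phi.T @ (Phi @ alpha - f))   # analytic gradient of E

res = minimize(E, np.zeros(n), jac=dE, method="L-BFGS-B")

# Cross-check against the normal equation (6): Phi^T Phi alpha = Phi^T f
alpha_ne = np.linalg.solve(Phi.T @ Phi, Phi.T @ f)
```

Because the energy is quadratic, both routes reach the same minimum; L-BFGS becomes preferable when the centres and widths are optimized jointly and the problem is no longer quadratic.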

III. EXPERIMENTAL RESULTS
We introduce the quantitative and qualitative metrics (see Section III-A) and the experimental tests (see Section III-B). We compare our adaptive methods with uniform sampling, where the membership functions are centred at the nodes of a downsampling of the input grid with a ratio of one node every four per row and column, and their supports are constant with a fixed width. In the F-transform case, the approximated signal is computed as F⁻¹(Ff). In the weighted linear combination case, the weights are the solution to (6). Then, we compare our method with previous work from different classes: cubic convolution from uniform subsampling [13], adaptive sparse sampling [12], and multistep F-transform [11]. For our experimental tests, we use the "WWF" [see Fig. 1(a)], "Pepsi" (see Fig. 2, top-left), "MRI" (see Fig. 3, top-left), and "Puccini" (see Fig. 4, top-left) 2-D images.

A. Quantitative and Qualitative Metrics
Given an input signal f and its approximation f̃ defined on m points, we evaluate the NCC := Σ_{j=1}^m (f(q_j) − f_a)(f̃(q_j) − f̃_a) / (‖f − f_a‖₂ ‖f̃ − f̃_a‖₂), where f_a and f̃_a are the average values of the input and reconstructed signals, respectively; the mean square error mse := (1/m) Σ_{j=1}^m |f(q_j) − f̃(q_j)|²; and the NRMSE, i.e., the normalized root of the mse. The NCC value varies from 0 (worst case) to 1 (best case), while the mse and NRMSE vary from +∞ (worst case) to 0 (best case). As qualitative metrics, we consider feature preservation, without either over-smoothing or artefacts in the sampling or the reconstructed signal, and spectral properties (e.g., blue-noise) that allow us to achieve a more accurate signal reconstruction.
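The three quantitative metrics can be sketched as follows; note that the NRMSE is normalized here by the range of the input signal, which is one common convention and an assumption on our part, since the paper does not spell out the normalization.

```python
import numpy as np

def ncc(f, g):
    """Normalized cross-correlation between input f and reconstruction g."""
    fc, gc = f - f.mean(), g - g.mean()
    return (fc @ gc) / (np.linalg.norm(fc) * np.linalg.norm(gc))

def mse(f, g):
    """Mean square error over the m sample points."""
    return np.mean((f - g) ** 2)

def nrmse(f, g):
    """Root of the mse, normalized by the range of the input signal (assumed)."""
    return np.sqrt(mse(f, g)) / (f.max() - f.min())

f = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # toy input
g = f + 0.01                                 # a near-perfect reconstruction
```

A constant offset leaves the NCC at its best value 1, while the mse and NRMSE grow with the offset, which is why the three metrics are reported together.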

B. Discussion
We compare the results (see Figs. 2-4) of (i) the adaptive fuzzy (see Section II-C1), (ii) adaptive weighted (see Section II-C2), (iii) uniform fuzzy, and (iv) uniform weighted sampling and reconstruction.
As parameters, we use 256 × 256 input images (n = 4096 centres) and 192 × 192 input images (n = 2304 centres). In (i) and (ii), the centres and supports are computed through the minimization of the energy functionals described in Sections II-C1 and II-C2, respectively. In (iii) and (iv), the centres are fixed on the regular grid of the input image, with a downsampling ratio of one node every four per row and column; the supports are fixed by experimentally setting the width of each Gaussian membership function to 0.01. In (ii) and (iv), the weights are computed through the solution of the linear system (6). We also mention that, given an input image defined on 256 × 256 = 65 536 points, all the methods have 65 536/16 = 4096 centres; (i) has 4096 × 3 variables, (ii) has 4096 × 4 variables, (iii) does not have an optimization phase, and (iv) has 4096 variables.
Our adaptive sampling methods improve the results of the uniform sampling methods, thus showing that the parameters of the membership functions can be optimized according to the input signal. Furthermore, adaptive fuzzy sampling improves the approximation results of adaptive weighted sampling in most examples. In the Pepsi image (see Fig. 2 and Table I), adaptive fuzzy sampling better preserves the gray-scale values; the mse value is 0.002, compared with 0.005 for uniform fuzzy and 0.003 for adaptive weighted sampling. In magnetic resonance images (see Fig. 3 and Table II), adaptive weighted sampling has better results, and adaptive fuzzy improves the results of uniform fuzzy sampling. In the Puccini image (see

TABLE I: With reference to Fig. 2, we report the quantitative metrics of the methods.
Fig. 4 and Table III), adaptive fuzzy has the best results in terms of quantitative metrics, with an mse value of 0.0004 compared with 0.0007 and 0.0018 for adaptive weighted and uniform fuzzy sampling, respectively. A dot-pattern artefact is highly present in uniform fuzzy and slightly present in adaptive fuzzy sampling. The linear combination of Gaussian membership functions (i.e., uniform/adaptive weighted sampling) performs well when the input image has a pre-eminent dark or light gray-scale value (see Fig. 5): the required number of centres of the membership functions, and consequently the approximation accuracy, increases exponentially from dark to light areas. This method approximates a constant black area with few membership functions; if the area is
constant white, then it is sufficient to approximate the negative of the input image f (e.g., |255 − f| in a 0-255 gray-scale). For this reason, adaptive weighted sampling performs well with the biomedical image [see Fig. 5(b)], where its results outperform adaptive fuzzy sampling. In contrast, the Puccini image has a uniform distribution of gray-scale values on the 0-255 scale, and adaptive weighted sampling does not provide good approximation results on this image. Also, the Pepsi image has both dark and light constant areas [see Fig. 5(c)]; adaptive weighted sampling well approximates the white areas but does not correctly approximate the dark ones (or vice versa, depending on the selection of the input image: normal versus negative). In contrast, our adaptive fuzzy sampling does not depend on the distribution of the gray-scale values of the image: it approximates the Pepsi, biomedical, and Puccini images with the same level of accuracy. We compare our method with the methods in [11], [12], and [13]. With the same number of centres (e.g., 4096 in Fig. 2), our method outperforms previous work in quantitative metrics and in the visual evaluation of the reconstructed image (Figs. 2-4 and Tables I-III). For example, the mse value on the Pepsi image is 0.008 in [13], while our adaptive fuzzy sampling achieves 0.002. Also, the blurring effect is more evident in previous work (e.g., the MRI head shape and the Puccini silhouette) than with our adaptive fuzzy and weighted sampling.
Sampling and membership functions: Among different classes of membership functions, such as harmonic locally supported or polynomial globally supported (see Fig. 6), we select the locally supported Gaussian membership functions for all our tests. Even though the support of the Gaussian membership function (2) at p_i is global, we consider only the contribution of the points that fall inside the sphere of centre p_i and radius σ_i. This choice is motivated by 1) the exponential decay of exp(−‖q − p_i‖₂²/σ_i) as the distance ‖q − p_i‖₂ increases and 2) the need to work with a sparse Gram matrix Φ. We also mention that our method generalizes to different membership functions. Fig. 7 shows the centres' selection of the adaptive weighted (first row) and adaptive fuzzy (second row) sampling. The adaptive weighted sampling optimizes the position of the centres in terms of geometric preservation of the input signal and of the blue-noise property. In the adaptive fuzzy sampling (i.e., the fuzzy partition), the pattern of the centres is regular where the image is uniform (e.g., the background of the Puccini image), while the centres partially adapt to the geometries of the image; in particular, we recognize the letters of the Pepsi logo, the Puccini silhouette, and the head profile. Fig. 8 (right) shows the progressive centres' selection of the adaptive fuzzy sampling during the iterations of the optimization method on a black-and-white image representing the letter "C." At the beginning of the optimization, the centres are placed on the regular grid, since we initialize the variables with uniform sampling. In contrast, at the end of the iterations, the centres approximate the contour of the letter. Furthermore, Fig. 8 (left) shows the color map of the displacement of each centre from its beginning position on the regular grid; red colors are associated with centres with a higher position shift. Fig. 9 and Table IV show the sampling, reconstruction, and quantitative metrics of the F-transform applied to the WWF logo when increasing the number of centres; in particular, the NRMSE value decreases from 0.157 with 250 centres to 0.049 with more than 2K centres.
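The truncated-support construction described above can be sketched as a sparse Gram matrix assembly, keeping for each centre only the points inside the sphere of radius σ_i; the 1-D domain and parameter values are illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Truncated Gaussian supports: keep only the points inside the sphere of centre
# p_i and radius sigma_i, so each row of the Gram matrix is sparse.
m, n = 400, 25
q = np.linspace(0.0, 1.0, m)
p = np.linspace(0.0, 1.0, n)
sigma = np.full(n, 0.05)                     # truncation radii

rows, cols, vals = [], [], []
for i in range(n):
    inside = np.abs(q - p[i]) <= sigma[i]    # points inside the truncated support
    idx = np.flatnonzero(inside)
    rows.extend([i] * idx.size)
    cols.extend(idx.tolist())
    vals.extend(np.exp(-((q[idx] - p[i]) ** 2) / sigma[i]).tolist())

Phi = csr_matrix((vals, (rows, cols)), shape=(n, m))
density = Phi.nnz / (n * m)                  # well below 1: sparse storage pays off
```

The exponential decay of the Gaussian makes the discarded entries negligible, while the sparse rows keep both the storage and the matrix-vector products of Section II-C1 cheap.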
In both adaptive methods, the centres and supports of the membership functions adapt to the characteristics of the signal, improving the reconstruction of the input signal. Fig. 10 (first row) shows the covering of the input points ∪_{i∈I} supp(ϕ_i) for both methods. In the adaptive weighted sampling (a), the number of membership functions whose support covers an input point varies from 0 (i.e., white areas of the input image) to 47 (i.e., black areas of the input image). The centres move to white-valued pixels; depending on the image, the domain Ω may not be fully covered. In the adaptive fuzzy sampling (b), the number of membership functions whose support covers an input point varies from 6 to 18; the centres and supports adapt to the signal and allow us to cover the whole input domain. In the uniform sampling, all the input points are covered due to the uniform position of the centres. The adaptive fuzzy sampling is designed for the approximation and reconstruction of the input image without accounting for the position of the centres with respect to image patterns. On the other hand, adaptive fuzzy sampling has uniform results in terms of reconstruction accuracy on any image. In contrast, the results of adaptive weighted sampling are affected by the histogram of the gray-scale values (see Fig. 5). Fig. 10 (second row) shows the centres and widths of our adaptive methods. The optimized centres and widths adapt to the input signal to improve the approximation of the signal. The width of each membership function is proportional to the color in a scale from blue (small width) to yellow (large width). The width is smaller at the edges of image patterns, to better approximate local features (e.g., the panda's silhouette), and larger where the gray value is more uniform (e.g., the image background). We mention that adaptive weighted sampling is defined as a sum of Gaussians and does not place any centre where the pixel value is 0. On the contrary, adaptive fuzzy sampling applies the F-transform and its inverse, and the centres are placed in both the dark and light areas of the image.
Optimization, execution time, and memory storage: Our adaptive fuzzy sampling is an unconstrained and nonconvex problem whose analytic derivatives are unavailable. PRAXIS applies a local optimization without any knowledge of the analytic derivatives; the search directions are computed through the identification of the principal axes. For these reasons, the optimizer can select search directions that increase the functional value and generate a nonmonotonic behavior. In this case, the search direction is modified to reach the minimum of the functional. For the adaptive weighted sampling, we apply L-BFGS, which uses the analytic derivatives of the functional. Fig. 11 shows the behavior of the functional and the experimental convergence during the minimization of the functional of adaptive fuzzy sampling (3) through the PRAXIS optimizer, and of adaptive weighted sampling [see (4) and (5)] through the L-BFGS optimizer. The adaptive fuzzy and weighted samplings converge to an approximation of the optimal solution. The proof of the convergence of the optimizers is given in [22] and [23].
To reconstruct the Puccini image with the same level of reconstruction accuracy (see Table V), our adaptive fuzzy sampling uses 2K centres; in contrast, the methods in [12] and [13] need more than 8K centres. A lower number of centres corresponds to a lower memory storage and computational cost for the image reconstruction. The adaptive fuzzy sampling is slower than the adaptive weighted sampling, i.e., 1500 versus 800 seconds (see Table V), due to the execution time of the optimizer. Uniform fuzzy and weighted samplings do not require the solution of a minimization problem to determine the optimal centres, and their execution times are comparable to state-of-the-art methods, at the cost of a lower accuracy. The experimental tests have been performed with MATLAB 2022a and the NLopt package [37]. Further improvements in terms of execution time could be achieved with high-performance computing (HPC) approaches [34].

IV. CONCLUSIONS AND FUTURE WORK
We have presented a novel method for approximating and reconstructing 2-D images with adaptive 2-D Gaussian membership functions, through the F-transform and a weighted linear combination of the membership functions. The comparison of the adaptive sampling with the uniform sampling shows that the optimization of the centres and supports of the membership functions improves the accuracy of the approximation and reconstruction, both in terms of quantitative metrics and visual results, and that the F-transform guarantees a higher accuracy than the linear combination of membership functions. The adaptive F-transform is independent of the image properties in terms of the distribution of gray values. The sampling preserves the geometries and features of the input image. Our method is general with respect to different membership functions (e.g., polynomial and harmonic).
As the main limitation, our method requires an a priori selection of the number of membership functions. Furthermore, the optimization method for the adaptive F-transform requires many functional evaluations and, consequently, a high execution time. In future work, we want to apply our method to different data, e.g., 3-D images, signals defined on unstructured grids, and vector-valued signals. Furthermore, we plan to investigate the up-sampling of 2-D signals through fuzzy and linear approximation, and to analyze different initialisation strategies for the adaptive fuzzy centres and widths to improve the accuracy of the solution. Finally, we plan to investigate a high-performance computing implementation to reduce the execution time and extend the method to higher dimensional problems.
For instance, on a 2-D domain Ω := [a, b] × [c, d] the number of centres grows quadratically with the sampling of [a, b] and [c, d].
Here, Φ := (ϕ_i(q_j)), i = 1, ..., n, j = 1, ..., m, is the Gram matrix of the radial membership functions and Φ̄ = D⁻¹Φ is the normalized Gram matrix, where D is the diagonal matrix whose entries are the sums of the rows of Φ, i.e., D := diag(d) ∈ R^{n×n}, d(i) := Σ_{j=1}^m Φ(i, j). The matrix representation of the discrete inverse F-transform is defined as the Moore-Penrose pseudoinverse Φ̄†.

Fig. 8. (Left) Input image (192 × 192) and 2304 centres of adaptive fuzzy sampling, with displacement information from the start to the end position; (right) progressive optimization of the position of the centres.

Fig. 10. First row: (a) coverage of adaptive weighted sampling, range 0-47, and (b) adaptive fuzzy sampling, range 6-18; blue values represent pixels with lower coverage. Second row: (c) centres and widths of adaptive weighted and (d) adaptive fuzzy sampling; the color of each dot represents the width of the centre, from blue (small width) to yellow (large width).

TABLE II: With reference to Fig. 3, we report the quantitative metrics of the methods.

TABLE III: With reference to Fig. 4, we report the quantitative metrics of the methods.

TABLE IV: Concerning Figs. 9 and 1, we report the quantitative metrics with respect to the number of centres.

TABLE V: Concerning the Puccini test, we report the execution time and the number of variables for each state-of-the-art method to reach the accuracy of our adaptive fuzzy sampling.