The role of the input dimension d in approximating, in various norms, target sets of d-variable functions by linear combinations of adjustable computational units is studied. Results from the literature, which emphasize the number n of terms in the linear combination, are reformulated, and in some cases improved, with particular attention to the dependence on d. For the worst-case error, upper bounds are given in the factorized form ξ(d)κ(n), where κ is nonincreasing (typically κ(n) ∼ n^{-1/2}). Target sets of functions are described for which the function ξ is a polynomial. Some important cases are highlighted in which ξ decreases to zero as d → ∞. For target functions, the extent (e.g., the size of the domains in R^d on which they are defined), the scale (e.g., the maximum norms of the target functions), and the smoothness (e.g., the order of square-integrable partial derivatives) may depend on d, and the influence of such dimension-dependent parameters on model complexity is considered. The results are applied to approximation and to the solution of optimization problems by neural networks with perceptron and Gaussian radial computational units.
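As an illustration of the kind of behavior the abstract describes, the following sketch approximates a smooth target by a linear combination of n Gaussian radial units and records the empirical sup-norm error as n grows. The target function, the random choice of centers, the fixed width, and the least-squares fit are all assumptions made here for illustration; they are not the paper's construction, which concerns worst-case bounds over whole target sets.

```python
import numpy as np

def rbf_sup_error(n, d=1, n_sample=500, width=0.2, seed=0):
    """Fit a linear combination of n Gaussian RBF units to a sample of a
    smooth target on [0,1]^d (random centers, least-squares weights) and
    return the empirical sup-norm error on that sample."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_sample, d))
    y = np.sin(2 * np.pi * X.sum(axis=1))  # hypothetical smooth target
    centers = rng.uniform(0.0, 1.0, size=(n, d))
    # Design matrix: one Gaussian unit per column, evaluated at each sample.
    sq_dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    Phi = np.exp(-sq_dists / (2.0 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return float(np.max(np.abs(Phi @ w - y)))

# Empirical error for increasing numbers of units n.
errors = [rbf_sup_error(n) for n in (4, 16, 64)]
```

On this example the error typically shrinks as more units are added, in the spirit of the nonincreasing factor κ(n); the sketch does not, of course, verify a worst-case rate such as n^{-1/2}, which is a statement about a whole set of target functions.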
Date of Publication: Feb. 2012