I. Introduction
Artificial neural networks (ANNs), such as single hidden layer feedforward networks (SLFNs), have been successfully employed for classification and regression problems [1], [2] due to their universal approximation capability [3], [4]. One classical technique for training an SLFN is the backpropagation (BP) algorithm [5], which usually suffers from slow convergence, the local minima problem, and sensitivity to the learning rate. To alleviate these problems, randomization-based algorithms have been proposed, such as the network of Schmidt et al. [6], the random vector functional link (RVFL) neural network (NN) [7], the extreme learning machine (ELM) [8], and the radial basis function network (RBFN) [9], [10].

Pao et al. [7] and Pao and Takefuji [11] proposed the RVFL network, which has a simple architecture and efficient performance. The training process of the RVFL model has two stages. In the first stage, all the weights and biases from the input layer to the hidden layer are generated randomly within a given domain [12] and are fixed throughout the training phase. In the second stage, the output parameters are determined analytically via a closed-form solution [13], [14]. The RVFL network is fast to train and has good generalization performance. Thus, it has been employed in several areas, such as epileptic seizure classification [15], time series forecasting [16], daily crude oil price forecasting [17], and nonlinear system identification [18].
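As a concrete illustration of this two-stage procedure, the following minimal sketch trains an RVFL-style model in NumPy: random, fixed input-to-hidden weights, direct input-to-output links, and output weights obtained in closed form. The function names, the sigmoid activation, the uniform sampling range, and the use of the Moore-Penrose pseudoinverse for the least-squares step are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np

def train_rvfl(X, y, n_hidden=100, seed=0):
    """Illustrative two-stage RVFL training sketch (assumed hyperparameters)."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]

    # Stage 1: input-to-hidden weights and biases are drawn randomly
    # within a fixed range and kept frozen during training.
    W_in = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))  # hidden-layer activations

    # RVFL keeps direct links from the input layer to the output layer,
    # so the output layer sees the raw features alongside the hidden features.
    D = np.hstack([X, H])

    # Stage 2: output weights are determined analytically via a
    # least-squares (pseudoinverse) closed-form solution.
    beta = np.linalg.pinv(D) @ y
    return W_in, b, beta

def predict_rvfl(X, W_in, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + b)))
    return np.hstack([X, H]) @ beta
```

Because only the output weights are learned, and they are obtained by a single linear solve rather than iterative gradient descent, training is fast, which is one reason the RVFL network has been attractive in the application areas listed above.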