Inductive learning of nonlinear functions plays an important role in constructing predictive models and classifiers from data. We explore a novel randomized approach, proposed elsewhere [H. Kargupta (2001); H. Kargupta et al. (2002)], to constructing linear representations of nonlinear functions. The approach uses randomized codebooks, called genetic code-like transformations (GCTs), to construct an approximately linear representation of a nonlinear target function. We first derive some of the results presented elsewhere [H. Kargupta et al. (2002)] in a more general context. We then investigate different probabilistic and limit properties of GCTs, and present several experimental results to demonstrate the potential of this approach.
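To give the flavor of the idea, the following is a minimal sketch, not the paper's exact construction: the codebook `tau` (a hypothetical many-to-one map from 2-bit codons to symbols) and the use of Walsh-Hadamard spectral energy as the measure of "linearity" are illustrative assumptions. It shows how XOR, whose spectrum is purely order-2 over its original variables, acquires nonzero constant and first-order spectral energy when re-expressed over redundant codon bits through an unbalanced codebook.

```python
import itertools
import numpy as np

def walsh_spectrum(values):
    """Normalized Walsh-Hadamard transform of a +/-1 truth table."""
    coeffs = np.array(values, dtype=float)
    h = 1
    while h < len(coeffs):  # in-place fast WHT butterfly
        for i in range(0, len(coeffs), 2 * h):
            for j in range(i, i + h):
                a, b = coeffs[j], coeffs[j + h]
                coeffs[j], coeffs[j + h] = a + b, a - b
        h *= 2
    return coeffs / len(coeffs)

def low_order_energy(coeffs, max_order=1):
    """Fraction of spectral energy in coefficients of order <= max_order."""
    total = float(np.sum(coeffs ** 2))
    low = sum(c * c for idx, c in enumerate(coeffs)
              if bin(idx).count("1") <= max_order)
    return low / total

# XOR in +/-1 encoding: all spectral energy sits in the order-2 coefficient,
# so nothing is captured by the constant and linear terms.
xor = [1 - 2 * (x1 ^ x2) for x1, x2 in itertools.product([0, 1], repeat=2)]
print(low_order_energy(walsh_spectrum(xor)))   # 0.0 -> purely nonlinear

# Hypothetical unbalanced codebook: codons 00, 01, 10 decode to 0; 11 to 1.
tau = [0, 0, 0, 1]

# Function induced on the 4 codon bits: decode each codon pair, then XOR.
g = [1 - 2 * (tau[y0 * 2 + y1] ^ tau[y2 * 2 + y3])
     for y0, y1, y2, y3 in itertools.product([0, 1], repeat=4)]
print(low_order_energy(walsh_spectrum(g)))     # 0.3125 -> partly linear now
```

The redundancy of the code is what matters here: because three codons decode to 0 and one to 1, the decoding map itself has nonzero constant and first-order Fourier coefficients, and those propagate into the induced function, shifting spectral energy toward the linear part.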