We propose novel direct and indirect learning predistorters (PDs) that employ a new baseband simplicial canonical piecewise-linear (SCPWL) function. The performance of the proposed PDs is easily controlled by varying the number of segments of the SCPWL function. Compared with polynomial-based PDs, our SCPWL-based PDs are more robust in modeling strong nonlinearities and are less sensitive to input noise. In particular, we show that noise appearing in the feedback path of an indirect learning SCPWL PD has a negligible effect on performance, whereas the polynomial counterpart suffers from a noise-induced coefficient bias. We consider adaptive implementations of both Hammerstein-based and memory-based SCPWL PDs; the former requires fewer parameters to be identified, while the latter allows more straightforward parameter identification. When deriving the PD algorithms, we avoid a separate PA identification step, which allows a true real-time, or sample-by-sample, implementation without an alternating PA and PD identification procedure. However, to arrive at efficient sample-by-sample algorithms for Hammerstein PDs, we need to bypass the problem of the associated nonconvex cost function. This is done by employing a modified, linear-in-the-parameters Wiener model whose parameters can be used explicitly or implicitly for both indirect and direct learning. Extensive simulations confirm that the proposed SCPWL PDs outperform their polynomial counterparts, especially when noise is present in the feedback path of the indirect learning structure. This is also verified by circuit-level simulations of the Freescale MRF6S23100H class-AB PA in an 802.16d WiMAX system.
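To make the "number of segments" control concrete, the following is a minimal sketch of evaluating a one-dimensional canonical piecewise-linear function in the Chua–Kang form, where each breakpoint contributes one absolute-value basis term and thus one extra segment. The function name `scpwl_eval` and the specific parameterization are illustrative assumptions, not the paper's exact baseband SCPWL formulation (which applies such a map to the complex baseband signal):

```python
import numpy as np

def scpwl_eval(x, c0, c1, coeffs, breakpoints):
    # Canonical piecewise-linear form (Chua-Kang):
    #   f(x) = c0 + c1*x + sum_k coeffs[k] * |x - breakpoints[k]|
    # Each breakpoint adds one absolute-value term, i.e. one more
    # linear segment; more segments give a finer nonlinear fit.
    # NOTE: illustrative 1-D real-valued sketch, not the paper's
    # exact complex-baseband SCPWL parameterization.
    x = np.asarray(x, dtype=float)
    y = c0 + c1 * x
    for ck, bk in zip(coeffs, breakpoints):
        y = y + ck * np.abs(x - bk)
    return y
```

Because the output is linear in `c0`, `c1`, and `coeffs` for fixed breakpoints, such a model can be fitted with standard linear adaptive algorithms, which is what makes segment-based PDs attractive for sample-by-sample identification.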
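The indirect learning idea mentioned above can be sketched as follows: a postdistorter is adapted sample by sample so that it maps the PA output back to the known PA input, and its coefficients are then copied into the predistorter. The toy PA model, the polynomial regressors (standing in for the paper's SCPWL segment functions), and the complex LMS step size below are all illustrative assumptions, not the paper's algorithm or the Freescale device model:

```python
import numpy as np

rng = np.random.default_rng(0)

def pa(u):
    # Toy memoryless compressive PA (assumed stand-in, not the
    # Freescale MRF6S23100H model simulated in the paper).
    return u - 0.1 * u * np.abs(u) ** 2

def basis(u):
    # Linear-in-the-parameters regressors; a polynomial pair is used
    # here only for brevity. The paper's point is that SCPWL segment
    # functions replace such polynomial terms.
    return np.array([u, u * np.abs(u) ** 2])

# Indirect learning: adapt a POSTdistorter that maps the PA output
# back to the PA input, sample by sample, with complex LMS.
w = np.zeros(2, dtype=complex)
mu = 0.2  # illustrative step size
for _ in range(5000):
    x = 0.5 * (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
    y = pa(x)                      # PA output (unit small-signal gain)
    phi = basis(y)                 # postdistorter input = PA output
    e = x - w @ phi                # error against the known PA input
    w = w + mu * e * np.conj(phi)  # complex LMS update

# Copying w into a predistorter placed before the PA linearizes it:
x0 = 0.6 + 0.0j
err_raw = abs(pa(x0) - x0)            # distortion without PD
err_pd = abs(pa(w @ basis(x0)) - x0)  # distortion with PD
```

In this noiseless feedback path the scheme works well for either basis; the abstract's claim is that once noise enters the feedback path, the polynomial regressors acquire a coefficient bias while the SCPWL segment basis does not.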