This paper formulates an evidence-theoretic multimodal unification approach using belief functions that accounts for variability in biometric image characteristics. When nonideal images are processed, variation in the quality of features at different levels of abstraction may cause individual classifiers to generate conflicting genuine-impostor decisions. Existing fusion approaches are nonadaptive and do not always guarantee optimal performance improvements. We propose a contextual unification framework that dynamically selects the most appropriate evidence-theoretic fusion algorithm for a given scenario. In the first approach, the unification framework uses deterministic rules to select the fusion algorithm; in the second, the framework learns from the input evidence using a 2ν-granular support vector machine. The effectiveness of the unification approach is experimentally validated by fusing match scores from level-2 and level-3 fingerprint features. Compared with existing fusion algorithms, the proposed unification approach is computationally efficient, and verification accuracy is not compromised even when conflicting decisions are encountered.