Robotic vehicles operating in outdoor environments, commonly referred to as unmanned ground vehicles (UGVs), are confronted with unstructured and semi-structured environments that are highly variable. Geographical location strongly influences an environment's appearance; longer-term seasonal cycles and immediate effects such as weather and lighting conditions add further variation. This environmental diversity has long challenged researchers, as developing a generalized terrain classification algorithm has proven very difficult. Researchers have sidestepped this problem by relying on ranging sensors and constructing 2½D or, more recently, 3D world representations. Although geometric representations have been used extensively, orientation errors limit the lookahead distance. An important UGV capability is high-speed traversal; extending the lookahead distance, which in turn increases the maximum attainable vehicle speed, is therefore an active area of research. This focus on high-speed traversal in variable environments has pushed researchers to investigate techniques that allow learning from experience, in a more human-like manner. This paper presents Defence R&D Canada - Suffield's progress in extending a 2½D world representation using vision and learning to infer geometry.