Recently, it has been shown that top-down connections improve recognition in supervised learning. In the work presented here, we show how top-down connections represent temporal context as expectation, and how such expectation assists perception in a continuously changing physical world with which an agent interacts during its developmental learning. In experiments on object recognition and vehicle recognition using two types of networks (deriving either global or local features), expectation greatly improves performance, reaching nearly 100% after the transition periods. We also analyze why expectation improves performance in such real-world contexts.
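The core idea can be pictured as a recurrent signal that carries the network's recent output back to the input side, so that temporally consistent interpretations of the current frame are favored. The following is a minimal sketch of that idea, not the networks used here: the blending weight `alpha`, the simple mixing rule, and the function `recognize_sequence` are illustrative assumptions, and the bottom-up scores are assumed to be non-negative (e.g., softmax outputs).

```python
import numpy as np

def recognize_sequence(bottom_up_scores, alpha=0.5):
    """Bias per-frame bottom-up class scores with a top-down expectation.

    bottom_up_scores: array of shape (T, C), non-negative class scores per frame.
    alpha: weight on the top-down expectation (assumed value for illustration).
    Returns the predicted class index for each of the T frames.
    """
    num_frames, num_classes = bottom_up_scores.shape
    # Start with a uniform expectation before any context is available.
    expectation = np.full(num_classes, 1.0 / num_classes)
    predictions = []
    for t in range(num_frames):
        # Top-down expectation from previous frames biases the current
        # bottom-up evidence toward temporally consistent classes.
        combined = (1.0 - alpha) * bottom_up_scores[t] + alpha * expectation
        predictions.append(int(np.argmax(combined)))
        # Carry the (normalized) combined response forward as the new expectation.
        expectation = combined / combined.sum()
    return predictions
```

In a temporal stream where the same object stays in view for many frames, this kind of carried-over expectation suppresses isolated per-frame errors, which is consistent with the performance gains outside the transition periods reported above.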