Unsupervised learning of high-level invariant visual representations through temporal coherence
The temporal coherence principle is the idea of discarding the rapidly changing components of a temporal signal while retaining the slowly varying ones, in order to extract useful invariances from the signal. We note that most applications of the temporal coherence principle to visual stimuli aim at modeling invariances in early vision (mostly deriving invariance properties of complex cells in the primary visual cortex). Networks implementing temporal coherence that can accomplish the more challenging task of modeling invariances in higher vision, and that perform reasonably well on real-world object datasets requiring such complex invariant recognition capability, are scarce. In this work, we address this issue by investigating whether a specific variant of the temporal coherence idea, slow feature analysis (SFA), can be used to build high-level visual representations that might be useful for invariant object recognition tasks. To date, we know of no network implementation of SFA that has been challenged on a real-world dataset rather than on toy sets of simple, artificial stimuli. To this end, we use single SFA nodes and very generic feed-forward network architectures to test whether SFA itself is capable of modeling high-level invariances in realistic object datasets. We test our models on two datasets that require such a capability for good recognition performance: first, on a dataset of letters undergoing translation, planar rotation and scale changes, and second, on the COIL-20 dataset, to see whether SFA can successfully learn viewpoint invariance. Our results suggest that SFA can yield satisfactory results on these datasets, especially when used as a pre-processing step for even very simple supervised classification algorithms.
The major limitations to applying SFA to realistic object databases have been the requirement of large training sets for successful learning and the tendency to quickly overfit the training data as the SFA models become even slightly more complex (especially for SFA-3 and SFA-4).
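To make the slowness objective concrete, the following is a minimal sketch of linear SFA (not the authors' exact implementation): whiten the input signal, then take the directions along which the whitened signal's temporal derivative has the least variance. The toy data (a slow sine mixed with a fast one) and all variable names are illustrative assumptions.

```python
import numpy as np

def linear_sfa(x, n_features=1):
    """Linear slow feature analysis.

    x: array of shape (T, D), a multivariate time series.
    Returns the n_features slowest output signals, shape (T, n_features).
    """
    # 1. Centre the data.
    x = x - x.mean(axis=0)
    # 2. Whiten: rotate and rescale so the covariance becomes the identity.
    eigval, eigvec = np.linalg.eigh(np.cov(x, rowvar=False))
    z = x @ (eigvec / np.sqrt(eigval))
    # 3. Temporal derivative of the whitened signal (finite differences).
    dz = np.diff(z, axis=0)
    # 4. Slowest directions = eigenvectors of the derivative covariance
    #    with the smallest eigenvalues (eigh returns them in ascending order).
    _, deigvec = np.linalg.eigh(np.cov(dz, rowvar=False))
    return z @ deigvec[:, :n_features]

# Toy check: a slow sine linearly mixed with a fast one; the slowest
# SFA output should be dominated by the slow source.
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(40 * t)
rng = np.random.default_rng(0)
x = np.column_stack([slow, fast]) @ rng.normal(size=(2, 2))
y = linear_sfa(x, n_features=1)[:, 0]
corr = abs(np.corrcoef(y, slow)[0, 1])  # high correlation, up to sign
```

The same principle underlies the quadratic variants (SFA-2 and above): the input is first expanded nonlinearly, and linear SFA is then applied in the expanded space, which is also where the overfitting tendency noted above originates.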