Robust self-localisation and navigation based on hippocampal place cells
Abstract
A computational model of hippocampal function in spatial learning is presented. A spatial representation is incrementally acquired during exploration. Visual and self-motion information is fed into a network of rate-coded neurons. A consistent and stable place code emerges through unsupervised Hebbian learning between place and head-direction cells. Based on this representation, goal-oriented navigation is learnt by applying a reward-based learning mechanism between the hippocampus and the nucleus accumbens. The model, validated on both a real and a simulated robot, successfully localises itself by recalibrating its path integrator using visual input. A navigation map is learnt after about 20 trials, comparable to rats in the water maze. In contrast to previous work, this system processes realistic visual input, no compass is needed for localisation, and the reward-based learning mechanism extends discrete navigation models to continuous space. The model reproduces experimental findings and suggests several neurophysiological and behavioural predictions for the rat.
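The unsupervised Hebbian mechanism described above can be illustrated with a minimal sketch: rate-coded place cells driven by visual feature inputs, with co-activity strengthening the feedforward weights. All names and parameters here (n_visual, n_place, eta, the saturating rate function, the normalisation step) are illustrative assumptions, not the model's actual architecture or values.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visual, n_place = 64, 32   # hypothetical visual feature cells -> place cells
eta = 0.01                   # assumed learning rate
W = rng.uniform(0.0, 0.1, size=(n_place, n_visual))   # feedforward weights

def place_activity(visual_rates: np.ndarray) -> np.ndarray:
    """Rate-coded place-cell response to the current visual input."""
    return np.tanh(W @ visual_rates)          # simple saturating rate model

def hebbian_update(visual_rates: np.ndarray) -> None:
    """One unsupervised Hebbian step: strengthen co-active input/place-cell
    pairs, then normalise rows so the place code stays bounded and stable."""
    global W
    r_place = place_activity(visual_rates)
    W += eta * np.outer(r_place, visual_rates)        # Hebb: post * pre
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # multiplicative normalisation

# One exploration step with a random visual snapshot (stand-in for real input)
hebbian_update(rng.random(n_visual))
```

This sketch only captures the generic idea of a place code forming from correlated visual input during exploration; the full model additionally combines self-motion (path-integration) input, head-direction cells, and reward-based learning toward the nucleus accumbens, which are not shown here.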