This talk presents an overview of a computational approach toward understanding the
different contributions of the neocortex and hippocampus in learning and memory. 
The approach is based on a set of principles derived from converging biological,
psychological, and computational constraints.  The most central principles are that
the neocortex employs a small learning rate and overlapping distributed
representations to extract the general statistical structure of the environment,
while the hippocampus learns rapidly, using separated representations to encode the
details of specific events with minimal interference.  The application of these
principles to recognition memory phenomena using biologically based neural network
models of the neocortex and hippocampus will be presented.  These models show how
the neocortical and hippocampal contributions differ in their sensitivity to
manipulations of item similarity and
interference, making predictions that have been confirmed in neuropsychological and
behavioral tests.
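
The division of labor described above can be illustrated with a minimal sketch (not the actual models presented in the talk, which are biologically based networks): a slow learner with an overlapping representation averages over many noisy examples to extract their shared structure, while a fast learner with pattern-separated slots stores individual events in one shot without overwriting earlier ones. The variable names and the simple averaging rule here are illustrative assumptions, not the talk's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Slow, neocortex-style learning: one set of overlapping weights,
# updated with a small rate over many noisy examples, so the weights
# converge toward the general statistical structure (the mean pattern).
target = np.array([1.0, 0.0, 1.0, 0.0])
w_slow = np.zeros(4)
for _ in range(500):
    example = target + rng.normal(0.0, 0.3, size=4)  # noisy instance of the pattern
    w_slow += 0.01 * (example - w_slow)              # small learning rate

# Fast, hippocampus-style learning: each event gets its own separated
# slot and is stored in a single trial, so encoding a new event does
# not interfere with memories of previous events.
memory = {}

def store(key, pattern):
    memory[key] = pattern.copy()  # rapid one-shot encoding into a separate slot

store("event_A", np.array([1.0, 0.0, 1.0, 0.0]))
store("event_B", np.array([0.0, 1.0, 0.0, 1.0]))
# event_A remains intact after event_B is learned.
```

The contrast is the point: if the fast learner instead wrote both events into the same overlapping weights, the second event would partially overwrite the first, which is the interference that separated representations are assumed to minimize.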

