Learning Inductively and Analytically

Sebastian Thrun
University of Bonn and Carnegie Mellon University

4:00 pm, Thursday, March 23, Room 1325

Research on machine learning and AI has led to the identification of two major learning paradigms: inductive and analytical. Inductive techniques learn purely by observing statistical regularities in the data. Analytical approaches can generalize from far less training data, relying instead on prior knowledge about the learning problem ("domain knowledge"). While many researchers have noted the importance of combining inductive and analytical learning, we still lack combined methods that are sufficiently effective in practice.

In this talk, I will present the explanation-based neural network learning algorithm (EBNN). EBNN integrates inductive neural network learning and analytical explanation-based learning, smoothly blending both learning principles. In a variety of application domains (mobile robot control, robot perception, game playing), EBNN has been shown to yield superior generalization accuracy.
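
To make the blend concrete, the sketch below shows one way the two principles can be combined in the spirit of EBNN: a target network is fit both to observed target values (the inductive component) and to slopes, i.e. input-output derivatives, extracted from a previously learned "domain theory" network that explains each training example (the analytical component). This is a minimal illustration rather than the algorithm as presented in the talk; the framework (PyTorch), the toy task, and the names domain_theory, target_net, and slope_weight are illustrative choices, not details from the talk.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # "Domain theory": stands in for a previously learned network that encodes
    # prior knowledge about the task (in practice, trained on earlier tasks).
    domain_theory = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    for p in domain_theory.parameters():
        p.requires_grad_(False)

    # Target network, trained from only a handful of observed examples.
    target_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    optimizer = torch.optim.Adam(target_net.parameters(), lr=1e-2)

    # A few observed training examples (the inductive component).
    x = torch.linspace(-3.0, 3.0, 8).unsqueeze(1)
    y = torch.sin(x)

    def slopes(net, inputs):
        # Derivative of the network's output with respect to its inputs.
        inputs = inputs.clone().requires_grad_(True)
        out = net(inputs)
        return torch.autograd.grad(out.sum(), inputs, create_graph=True)[0]

    slope_weight = 0.5  # trust placed in the domain theory's explanations

    for step in range(500):
        optimizer.zero_grad()
        # Inductive term: fit the observed target values.
        value_loss = ((target_net(x) - y) ** 2).mean()
        # Analytical term: fit the slopes obtained by "explaining" each
        # example with the domain theory.
        target_slopes = slopes(domain_theory, x).detach()
        slope_loss = ((slopes(target_net, x) - target_slopes) ** 2).mean()
        loss = value_loss + slope_weight * slope_loss
        loss.backward()
        optimizer.step()

In full EBNN, the weight on the slope term varies per example according to how accurately the domain theory explains that example, so the analytical component is trusted only where the prior knowledge is reliable; the fixed slope_weight above is a simplification.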

One of the key features of EBNN is its ability to transfer knowledge from previously encountered learning tasks to new ones. This makes it particularly applicable to scenarios in which a learner faces a whole collection of learning tasks, e.g., over its entire lifetime. In robotics domains, which will be of particular interest in this talk, the transfer of knowledge is crucial because of the cost of operating robot hardware. I will argue that approaches like EBNN are necessary to overcome some of the scaling problems faced by current machine learning technology, and I will outline research strategies for the design of a lifelong-learning robot.