Computer Sciences Department, University of Wisconsin-Madison

Symbolic and Neural Learning Algorithms: An Experimental Comparison

Jude W. Shavlik, Raymond J. Mooney, and Geoffrey G. Towell
1989

Although many symbolic and neural-network (connectionist) learning algorithms address the same problem of learning from classified examples, little is known about their comparative strengths and weaknesses. Experiments comparing the ID3 symbolic learning algorithm with the perceptron and back-propagation neural learning algorithms were performed using several large real-world data sets. Back-propagation classifies new examples about as accurately as the other two algorithms, but takes much longer to train. The effects of the amount of training data, imperfect training examples, and the encoding of the desired outputs are also analyzed empirically. Suggestions for handling imperfect data sets are described and empirically justified. The symbolic and neural approaches work equally well in the presence of noise, while back-propagation does better when examples are incompletely specified. Back-propagation is better able to exploit a distributed output encoding, although ID3 can also take advantage of this representation style.
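
As a rough, hypothetical sketch of the kind of comparison the report describes, the Python code below uses modern scikit-learn learners as stand-ins for the original implementations: DecisionTreeClassifier with the entropy splitting criterion in place of ID3, Perceptron for the perceptron, and MLPClassifier trained by back-propagation. The data set, hyperparameters, and measured times are illustrative only and are not the report's data sets or code.

    # Hypothetical sketch: scikit-learn stand-ins for ID3, the perceptron,
    # and back-propagation; not the report's original implementations or data.
    import time
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import Perceptron
    from sklearn.neural_network import MLPClassifier

    # A small stand-in data set; the report used several large real-world data sets.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    learners = {
        "ID3-style decision tree": DecisionTreeClassifier(criterion="entropy", random_state=0),
        "perceptron": Perceptron(max_iter=1000, random_state=0),
        "back-propagation (MLP)": MLPClassifier(hidden_layer_sizes=(32,),
                                                max_iter=2000, random_state=0),
    }

    for name, clf in learners.items():
        start = time.perf_counter()
        clf.fit(X_train, y_train)             # training time differs sharply across methods
        train_time = time.perf_counter() - start
        accuracy = clf.score(X_test, y_test)  # classification accuracy on held-out examples
        print(f"{name:25s}  accuracy={accuracy:.3f}  training_time={train_time:.2f}s")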
