Recognizing, Understanding, Deciding Whether to Obey, and Executing Commands
This paper examines a programmed model (called DECIDER-1) that (1) recognizes scenes of things, among which are (a) objects and (b) words that form commands (or questions or other types of statements); (2) recognizes the import of these commands; (3) decides whether to obey one; and then (4) uses the command to guide the consequent actions, along with any necessary perceptual search. It uses the same mechanisms to handle (a) the perceptual processes involved in recognizing objects and describing scenes, (b) the linguistic processes involved in parsing sentences and understanding their meaning, and (c) the retrieval processes needed to access pertinent facts in memory. This is in sharp contrast to most of today's systems, which receive the command through one channel, to be "understood" by one special-purpose set of routines, and perceive their environments through an entirely different channel. DECIDER-1 continues to characterize patterns, parse symbol strings, and access facts implied by input questions until an action is chosen, because it is sufficiently implied by this search through the memory net. Then it executes the implied action. Possible actions include Naming, Describing, Finding, Moving, and Answering queries. The programmed model, DECIDER-1, is presented in EASEy (an Encoder for Algorithmic Syntactic English that's easy; Uhr, 1973a) so that we can examine exactly what happens.
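The recognize-understand-decide-execute cycle described above can be sketched in miniature. This is a hypothetical illustration only: the names, data structures, and control flow below are assumptions for exposition, not the original EASEy program, which operated over a far richer memory net of patterns and facts.

```python
# Hypothetical, highly simplified sketch of a DECIDER-1-style control loop.
# All identifiers and structures here are illustrative assumptions.

# A toy "memory net": command words mapped to the actions they imply.
ACTIONS = {
    "name": lambda obj: f"That is a {obj}.",
    "find": lambda obj: f"Searching the scene for a {obj}...",
    "move": lambda obj: f"Moving the {obj}.",
}

def recognize(scene):
    """Separate an input scene into command words and objects.
    Both are recognized through the same channel, as in the paper."""
    words = scene.split()
    commands = [w for w in words if w in ACTIONS]
    objects = [w for w in words if w not in ACTIONS]
    return commands, objects

def decide_and_execute(scene):
    """Characterize the scene until an action is sufficiently implied,
    then execute it; otherwise decline to act."""
    commands, objects = recognize(scene)
    if commands and objects:
        verb, obj = commands[0], objects[0]
        return ACTIONS[verb](obj)
    return "No action sufficiently implied."
```

For example, `decide_and_execute("find triangle")` yields a Finding action, while an input with no recognized command word implies no action. The point the sketch preserves is that command words and objects arrive through a single perceptual channel and are handled by the same recognition mechanism, rather than through a separate command-parsing pipeline.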