- For the letter sequence sometimes seen as a printer's error, see etaoin shrdlu.
- Original author(s): Terry Winograd
- Written in: Micro Planner, Lisp
- Type: natural language understanding
SHRDLU was an early natural language understanding computer program, developed by Terry Winograd at MIT in 1968–1970. In it, the user carries on a conversation with the computer, moving objects, naming collections and querying the state of a simplified "blocks world", essentially a virtual box filled with different blocks.
SHRDLU was written in the Micro Planner and Lisp programming languages on a DEC PDP-6 computer with a DEC graphics terminal. Later additions were made at the computer graphics labs at the University of Utah, adding a full 3D rendering of SHRDLU's "world".
SHRDLU was primarily a language parser that allowed user interaction using English terms. The user instructed SHRDLU to move various objects around in the "blocks world", which contained various basic objects: blocks, cones, balls, etc. What made SHRDLU unique was the combination of four simple ideas that added up to make the simulation of "understanding" far more convincing.
One was that SHRDLU's world was so simple that the entire set of objects and locations could be described using as few as perhaps 50 words: nouns like "block" and "cone", verbs like "place on" and "move to", and adjectives like "big" and "blue". The possible combinations of these basic language building blocks were quite limited, so the program was fairly adept at figuring out what the user meant.
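The benefit of a closed vocabulary can be sketched as follows. This is a hypothetical toy in Python, not Winograd's actual grammar (which was a systemic grammar implemented in Micro Planner and Lisp); the word lists are illustrative assumptions:

```python
# Hypothetical sketch: with a closed vocabulary, a noun phrase like
# "big red block" can be analysed by simple set membership.
NOUNS = {"block", "cone", "ball", "pyramid", "box"}
ADJECTIVES = {"big", "small", "tall", "red", "green", "blue"}

def parse_noun_phrase(words):
    """Split a phrase like ['big', 'red', 'block'] into (adjectives, noun)."""
    adjectives = [w for w in words[:-1] if w in ADJECTIVES]
    noun = words[-1] if words[-1] in NOUNS else None
    if noun is None or len(adjectives) != len(words) - 1:
        raise ValueError(f"cannot parse noun phrase: {words}")
    return adjectives, noun

parse_noun_phrase("big red block".split())  # (['big', 'red'], 'block')
```

Because every word falls into a known category, there is little room for ambiguity; the real program's parser exploited the same property on a much richer grammar.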
SHRDLU also included a basic memory to supply context. One could ask SHRDLU to "put the green cone on the red block" and then "take the cone off"; "the cone" would be taken to mean the green cone one had just talked about. When additional adjectives were supplied, SHRDLU could search back further through the interactions to find the proper context in most cases. One could also ask questions about the history itself, for instance "did you pick up anything before the cone?"
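This kind of reference resolution can be sketched as a backwards search over the conversation history. The following Python fragment is a hypothetical illustration of the idea, not SHRDLU's actual mechanism:

```python
# Hypothetical sketch of SHRDLU-style reference resolution: "the cone"
# resolves to the most recently mentioned object matching the description.
history = []  # objects mentioned so far, oldest first

def mention(obj):
    history.append(obj)

def resolve(noun, adjectives=()):
    """Return the most recently mentioned object matching noun + adjectives."""
    for obj in reversed(history):
        if obj["noun"] == noun and all(a in obj["adjectives"] for a in adjectives):
            return obj
    return None

mention({"noun": "cone", "adjectives": ["green"]})
mention({"noun": "block", "adjectives": ["red"]})
resolve("cone")  # the green cone mentioned earlier
```

Searching from the most recent mention backwards is what makes "the cone" default to the cone just discussed, while extra adjectives narrow the search further back.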
A side effect of this memory, and of the original rules SHRDLU was supplied with, was that the program could answer questions about what was possible in the world and what was not. For instance, SHRDLU would deduce that blocks could be stacked by looking for examples, but would realize, after having tried it, that triangles could not be stacked. The "world" contained basic physics to make blocks fall over, independent of the language parser.
Finally, SHRDLU could also remember names given to objects, or arrangements of them. For instance one could say "a steeple is a small triangle on top of a tall rectangle"; SHRDLU could then answer questions about steeples in the blocks world, and build new ones.
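Learning a named arrangement amounts to storing a structural definition and later matching concrete structures against it. The sketch below is a hypothetical simplification in Python; SHRDLU represented such definitions as Micro Planner procedures:

```python
# Hypothetical sketch of learning "a steeple is a small triangle
# on top of a tall rectangle" as a stored structural definition.
definitions = {}

def define(name, structure):
    """Remember a named arrangement, e.g. the parts making up a steeple."""
    definitions[name] = structure

def is_a(name, structure):
    """Does a concrete arrangement satisfy a learned definition?"""
    return definitions.get(name) == structure

define("steeple", [("small triangle", "on top of", "tall rectangle")])
is_a("steeple", [("small triangle", "on top of", "tall rectangle")])  # True
```

With the definition stored, the program can both recognize existing steeples and construct new ones by satisfying the definition's parts.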
Here is an extract from a famous demonstration of SHRDLU (with messages converted from their original upper case for ease of reading):
- Person: Pick up a big red block.
- Computer: OK.
- Person: Grasp the pyramid.
- Computer: I don't understand which pyramid you mean.
- Person (changing their mind): Find a block which is taller than the one you are holding and put it into the box.
- Computer: By "it", I assume you mean the block which is taller than the one I am holding.
- Computer: OK.
- Person: What does the box contain?
- Computer: The blue pyramid and the blue block.
- Person: What is the pyramid supported by?
- Computer: The box.
- Person: How many blocks are not in the box?
- Computer: Four of them.
- Person: Is at least one of them narrower than the one which I told you to pick up?
- Computer: Yes, the red cube.
The result was a tremendously successful demonstration of AI. It led other AI researchers to excessive optimism, which faded when later systems attempted to handle more realistic situations involving real-world ambiguity and complexity. Continuing efforts in the original SHRDLU stream have tended to focus on providing the program with considerably more information from which it can draw conclusions, leading to projects such as Cyc.