Tuesday, November 27, 2007


Throughout most of written history, human intelligence has been considered a pinnacle product of evolutionary (or divine) design, repeatedly invoked as "something that separates us" from the rest of nature in a very "higher than thou" sense. In the past century the idea that our mental abilities may not be nearly as mystical as originally supposed has gained a foothold in modern science and psychology. Parallels drawn between computers and human minds have forced scientists and philosophers alike to ask "can a computer understand?" and, in turn, "what is human understanding?" John R. Searle argues not only that human understanding is impossible for a computer, but that a computer can have no form of understanding at all (Searle, reprinted in Cooney 249-252). While there is a multitude of arguments both defending and attacking this claim, most hinge on just what human understanding is. Given these conditions, I will attempt to formulate my own definition of human understanding, and then examine Searle's claim.
I'll borrow an example structure put forward by Searle to start off this line of thought; we will examine three cases, a person, a dog, and a lawn, each of which is said to be "thirsty" (Searle 78). Without debating metaphors, by "thirsty" I mean that all three are in need of water, and before we ask whether each understands, we should ask whether each can express this thirst. For the person there is a resounding yes; for our furry friend, though it will require more indirect communication, the answer is still yes. But what about the grass? If the grass is in dire need of water, let us say very thirsty, it will turn brown and seem to be expressing thirst, though it is not aware of its impending doom. The origin of thirst in the human and the dog is very similar: cells in their respective bodies react, like the grass cells, to the diminishing water supply; nerves and senses respond; and the person or dog becomes mentally aware of a need for water and experiences being thirsty. In most cases a person or dog would not wait until their skin was peeling to realize they were thirsty. The dog, we can safely say, "knows" it is thirsty, and specifically goes about either finding water or getting some assistance locating it. The person knows and understands his or her thirst just as the dog does, but as an adult can also know and understand the aforementioned conditions that bring about thirst, and beyond that can discuss that thirst comparatively with a companion.
So understanding in itself is a form of knowing, and human understanding seems to involve levels of understanding beyond simply being aware. Though I should specify that I am referring to adult understanding, since explaining to a child that vegetables are good for them beyond their immediate health is nearly impossible; we don't expect to converse with a five-year-old on just how various vitamins and minerals chemically benefit our well-being in the long run. Yet the fact that we can have a conversation with the child at all reveals that it does have some understanding. This is especially apparent when most basic ideas of right and wrong are taught through "how would it make you feel?" methods. But what understanding can a computer have? My laptop has at least as much "understanding" as my lawn, and would seem to be as aware as the average house pet, since when its battery is getting low it will inform me that I need to plug it in to continue using it, rather than shutting down without warning. However, neither my dog nor my laptop can discuss with me why they need their respective energy sources.
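That laptop "awareness," for what it is worth, amounts to little more than a threshold rule. A minimal sketch in Python (the 15 percent cutoff and the function name are my own illustration, not any real operating-system API):

    # The laptop's low-battery "awareness" as a bare threshold rule.
    def check_battery(percent_remaining):
        if percent_remaining < 15:
            print("Low battery: please plug in to continue.")

    check_battery(12)   # prints the warning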
Hypothetically, I could converse with an advanced computer about a dog's need for food or my laptop's need for power at a level as in-depth as with my friend Joe. But does it have any understanding of what it is saying to me, or of what I am saying to it? According to Searle's Chinese Room scenario, no (Searle, reprinted in Cooney 249-252); it is merely providing correct outputs for given inputs, much the same as it handles mathematical problems with rules and preprogrammed functions. Yet is that not, in a way, exactly what humans do? Given whatever someone says, your reaction will be compiled by digging through your own past experiences and formulating what you should say or do. And beyond that we can go into further detail about our reaction; Joe and I could endlessly break down a sentence, starting with a discussion about Florida, moving to the various local plant life, to orange groves, to oranges, to fruits, to a healthy diet, to chemistry, to physics, and so on through the layers of understanding and interconnected knowledge. A computer could do the same if programmed with linked arrays of information and sub-lists within each higher idea or phrase (see the sketch below). I imagine Searle would stop us short, "bup-bup-bup," waggling his finger, "the computer is still weaving through rules and preprogrammed information." But do I have a real understanding of what I myself speak about? My knowledge of Japan is a rather removed one; I can imagine walking down a street in Tokyo, yet do I truly understand it all? Still, there is something I understand: I understand walking down a city street, I know what Japanese characters and people look like, and in turn I can build a mental representation of a Tokyo street scene. To understand anything I have not directly experienced, I am forced to take my own experiences and indirectly adapt them to whatever information I am given.
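Those "linked arrays of information" could be sketched as a simple concept graph, where each idea points to narrower sub-ideas and conversation walks the links. The structure and concept names below are purely illustrative, not drawn from Searle or any real system:

    # Each concept points to narrower sub-concepts, so a conversation can
    # "break down" a topic layer by layer, as Joe and I do above.
    concepts = {
        "Florida":          ["local plant life"],
        "local plant life": ["orange groves"],
        "orange groves":    ["oranges"],
        "oranges":          ["fruits"],
        "fruits":           ["a healthy diet"],
        "a healthy diet":   ["chemistry"],
        "chemistry":        ["physics"],
        "physics":          [],
    }

    def break_down(topic, depth=0):
        """Walk the chain of sub-concepts, printing each layer."""
        print("  " * depth + topic)
        for sub in concepts.get(topic, []):
            break_down(sub, depth + 1)

    break_down("Florida")

Of course, this is exactly the kind of system Searle would dismiss: it weaves through stored links without experiencing any of them, which is the objection the rest of this essay tries to answer.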
So it is not understanding that the computer lacks, but direct experience with whatever it is discussing with me; it lacks any single experience to which it can relate a subject. While I concede that a computer may never be capable of human understanding, to say that it has absolutely no understanding is a much larger leap, akin to saying a dog does not understand that it is thirsty. While a calculator does not experience numbers any more than the poor soul locked in Searle's Chinese Room understands Chinese, I feel that if we specifically built a robot that had senses and worked from those, rather than merely from inputs typed by a user, it would begin to have at least a form of experience. There will always be a difference between a mechanical or digital system and a biological one, but what is experience itself? My senses are compiled through my nervous system, encoded in biochemical electricity in short-term memory, and then processed either for immediate use or stored in long-term memory. Every PC already performs the second half of this function: given certain user inputs it will either work something over in RAM or store information to a hard drive. Now if to that personal computer we add an array of senses and have it decide what needs to be stored and what operated upon, it is having a computer experience. And here we have a computer that understands: a complex and intelligent system capable of having experiences that is also able to learn, and more specifically to adapt old experiences to new ones.
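As a rough illustration of that sensing loop, here is a hypothetical sketch in which readings arrive from "senses" and the system itself decides whether to act on a reading immediately (its RAM, so to speak) or file it away (its hard drive). All names, stimuli, and thresholds are invented for the illustration:

    import random

    long_term_store = []   # stands in for the hard drive

    def read_sensor():
        """Pretend sense organ: returns a (stimulus, urgency) pair."""
        return random.choice([("low battery", 0.9), ("room temperature", 0.2),
                              ("loud noise", 0.7), ("dim light", 0.3)])

    def experience(cycles=5):
        for _ in range(cycles):
            stimulus, urgency = read_sensor()
            if urgency > 0.5:
                print(f"act now: {stimulus}")       # immediate processing, like RAM
            else:
                long_term_store.append(stimulus)    # remembered for later
        print("remembered:", long_term_store)

    experience()

The point of the sketch is not the code but the design choice: the machine, not the user, decides what is worth keeping, which is the minimal condition I am calling a "computer experience."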


CITATIONS
Searle, J. R., as reprinted in: Cooney, B. (2000). "The Place of Mind." Belmont, CA: Wadsworth.

Searle, J. R. (1994). "The Rediscovery of the Mind." Cambridge, MA: MIT Press.
