Introduction

Any discussion of the thinking of University of California, Berkeley professor John R. Searle must begin with the position he calls “Strong AI” (artificial intelligence): the claim that a machine has the ability to “think” simply because it has been fed the “correct” computer program. Searle points out, however, that “Strong AI” misses a basic point: any software program is simply a framework that designates the ways in which certain symbols are manipulated.
That manipulation cannot, under any definition or circumstance, be considered actual thought. To make this point, Searle uses what has come to be known as the “Chinese Room Argument.”

The Chinese Room Argument

The premise of the Chinese Room Argument is that a person with absolutely no understanding of the Chinese language is placed in a room that has baskets full of Chinese symbols. He is given a rule book in English that identifies the symbols and relates them to one another entirely by their shapes.
As Searle explains how it works: “Suppose that unknown to you the symbols passed into the room are called ‘questions’ by the people outside the room, and the symbols you pass back out of the room are called ‘answers to the questions’” (p. 32). The point he makes is that the person may hand out appropriate, even accurate, answers, and those responses may match the expectations of those asking the questions. However, this does not indicate that any real understanding has taken place or that any meaning is actually attached to the question-and-answer process.
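The mechanics Searle describes can be sketched as a simple lookup program. This is only an illustrative toy, not Searle’s own formulation, and the symbol strings below are invented placeholders rather than actual Chinese questions and answers:

```python
# A minimal sketch of the Chinese Room as pure symbol matching.
# The "rule book" is just a lookup table: input shapes map to output
# shapes. Nothing in the table represents meaning; the strings are
# opaque tokens as far as the program is concerned. (The symbols are
# invented placeholders for this illustration.)

RULE_BOOK = {
    "符号甲": "符号乙",  # "if you see this shape, pass back that shape"
    "符号丙": "符号丁",
}

def room(symbol: str) -> str:
    """Return the 'answer' symbol the rule book dictates for a given input."""
    # An unrecognized shape gets a placeholder back; no understanding occurs.
    return RULE_BOOK.get(symbol, "？")

# Outsiders may call the inputs "questions" and the outputs "answers",
# but the room itself only matches shapes.
print(room("符号甲"))  # passes back the paired shape
```

The person (or program) in the room does exactly what `room` does: match a shape, hand back the paired shape, and nothing more.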
The point is that a person cannot possibly come to understand Chinese simply by running a computer program designed for understanding the language. What has taken place is only a manipulation of conventional symbols according to the rules designed to act upon them. A number of considerations demonstrate that this is not a reasonable explanation of what takes place between the human mind and the functions of the brain.
A computer simply processes information by converting it into a series of symbolic representations and then manipulating those symbols according to rules created by the programmers. It is undeniably amazing that complex thoughts can be translated into an even more complex series of symbols. However, that is the process taking place . . . symbol manipulation, not thinking or mental interaction.

Reality as an Abstraction

Searle is accurate in his assessment that “symbols and programs are purely abstract notions.”
A machine can register that certain symbols represent what is understood to be “dog,” and the program can state what a dog actually is, but that certainly does not mean the computer understands what a dog is, what one looks like, how it feels when one licks a human hand, or the interactions a person has had with dogs over a lifetime. In other words, there is no reference to meaning or experience. The symbols of the program can stand for anything the programmer or the user wants. “To repeat, a computer has a syntax, but not semantics” (p. 33). Just as Searle’s Chinese Room example points out, without meaning, symbols have no purpose, no value, and no potential for cognitive interaction. In short, they have no meaning. They are only symbols, and what they are symbols of cannot be known since, as already noted, they cannot be understood. The true function of thinking is the process of sending and receiving that results in understanding, not the manipulation of pieces of data. So… can reality exist as an abstraction?
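The claim that the symbols “can stand for anything” can be made concrete with a toy program. The token names and rules below are hypothetical illustrations, not anything from Searle’s text: relabel every token arbitrarily and the program behaves identically, because only the shapes of the symbols ever matter to it:

```python
# Sketch: the same syntactic program under two different "interpretations".
# One rule set links the token "dog" to the token "animal"; the other uses
# arbitrary relabelings of the same single rule. The program cannot tell
# the difference, which is one way to see that its symbols carry no
# meaning of their own.

def classify(token: str, rules: dict) -> str:
    """Apply a rule table to a token; purely shape-based, no semantics."""
    return rules.get(token, "unknown")

rules_meaningful = {"dog": "animal"}   # labels a human finds meaningful
rules_arbitrary = {"X17": "Q42"}       # the same rule, arbitrarily relabeled

print(classify("dog", rules_meaningful))
print(classify("X17", rules_arbitrary))
```

Both calls succeed in exactly the same way; any “meaning” in the first rule set exists only in the mind of the human reading the labels.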
Only if the abstraction is created by a thinking entity that is able to conceive of what the concept of an “abstraction” might even be! The gulf between form and content cannot be adequately crossed without mental processes directly and specifically related to the concept being addressed. In addition, the biological processes that take place during human thinking, emotional response, and cognitive understanding are a complex combination of the multi-faceted inputs the brain is receiving at any one point in time.
Such inputs range from the physical senses to emotional response, memory, and physiological reaction. The inputs vary, and the processes vary to match and respond to them. It is not a matter of pre-established reactions to pre-established symbols and patterns. As Searle accurately notes, any thinking person is certain to become bored with the process of shifting and shaping meaningless symbols. In fact, one may go so far as to say that such a process is “mind-numbing,” and yet it is exactly what takes place in computer recognition and action.
Searle believes that, to a great degree, scientists and researchers have become so convinced that any human and/or natural process can be mapped or modeled that models are created for systems that are not appropriate for model representation. One of the best overall statements Searle makes regarding modeling and hypothetical standards applied to human processes is: “No one supposes that a computer simulation of a storm will leave us all wet, or a computer simulation of a fire is likely to burn the house down. Why on earth would anyone in his right mind suppose a computer simulation of mental processes actually had mental processes?” (p. 37). History has repeatedly shown that scientific undertakings have defeated themselves in attempting to prove a theory through a model, with subsequent generations finding that the entire foundation upon which the original premise rested was faulty. Thinking, understanding, knowledge, and related action are complicated processes that have relatively little to do with scientific modeling or with the idea that such an esoteric process can be duplicated by silicon chips and electricity.

BIBLIOGRAPHY

Searle, John R. “Can Computers Think?” Minds, Brains, and Science (The 1984 Reith Lectures), pp. 28–41.