Author: Addison Ellis
Category: Philosophy of Mind and Language
Word Count: 1000
Imagine: You direct your web browser to Google Translate. You select English as the input language and Chinese as the output language. You type ‘Horses are really cool!’ into the input field. In the blink of an eye, the output field reads ‘马是真的很酷!’ We know that the program running in the background takes input words and phrases in English, matches them against English words and phrases stored in its memory, each paired with a Chinese counterpart, and outputs the corresponding Chinese words and phrases in the appropriate order. Does this mean that Google Translate understands both English and Chinese well enough to perform the translation for you? That is, does it understand these languages the way a human translator understands them? Most people would say no: while a translation has occurred, the program itself doesn’t understand anything at all.1
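To make the mechanical picture vivid, here is a minimal sketch, in Python, of such a lookup-and-substitute process. The phrase table and the translate function are invented for illustration; the article does not claim that Google Translate literally works this way. The point is only that matching and swapping symbols can produce the correct output while nothing in the procedure understands anything.

```python
# Toy "translator": a purely mechanical lookup-and-substitute procedure.
# The phrase table is invented for illustration; nothing in the process
# requires understanding either English or Chinese.

PHRASE_TABLE = {
    "horses": "马",
    "are really cool": "是真的很酷",
}

def translate(sentence: str) -> str:
    """Replace each stored English phrase with its paired Chinese output."""
    result = sentence.lower()
    for english, chinese in PHRASE_TABLE.items():
        result = result.replace(english, chinese)
    # Chinese is conventionally written without spaces between words.
    return result.replace(" ", "")

print(translate("Horses are really cool!"))  # -> 马是真的很酷!
```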
It seems as though whatever function or ability is carried out by the mind in language translation is also ultimately carried out by the process described in the Google Translate case. That is, we can say that the two processes are functionally equivalent. But we want to say that Google Translate lacks mentality in a very important way.2 What does this tell us about our minds? According to some philosophers, it tells us that while our minds have a special ability to understand the world, certain functional analogs of our minds do not.
Franz Brentano claimed that what distinguishes systems possessing mentality from systems lacking it is a property called intentionality.3 Intentionality is the ability of the mind to be about things or to represent things as being thus-and-so. When I say “I see a horse about 200 meters from here,” I possess a mental state that represents my environment as containing a horse 200 meters from me. In this way, my mental state is also about that particular horse. So it seems that when I utter meaningful statements, or think or understand meaningful thoughts, my mind is also exhibiting the property of intentionality.
In a sense, however, it does seem that certain of the Chinese characters Google Translate outputs are about horses (‘马’, after all, means horse). So if we want to use intentionality to draw a dividing line between the mental and the non-mental, we’ll need to introduce a further distinction. This is the distinction, suggested by John Searle, between two sorts of intentionality – original and derived.4
1. A Defense of the Original/Derived Distinction
Intentionality is original when it is intrinsic to a mind—i.e., when it is the result of the natural activities of that mind. While computers, languages, or individual symbols may stand for or represent the world, some philosophers believe that they do so only derivatively. That is, creatures like us program computers, lend meanings to words, and give meaning to symbols. Without us imbuing artifacts with intentionality, these things could not be said to possess it. In this sense, we can think of the intentionality of computers, languages, and symbols as a derived intentionality—i.e., an intentionality that is lent to artifacts. If artifacts have intentionality only insofar as it is derived from some other thing’s original intentionality, then an artifact’s intentionality can be reduced to, or properly described in terms of, the original intentionality of its creator.
Here’s an example to make the point clear:
Imagine that I happen upon a slab of rock on which there are some peculiar (although perfectly natural) markings, and I discover that the structure of these markings happens to map onto the actual structure of the town of Twin Peaks, WA.5 It would be a stretch to claim that the slab therefore really represents the town of Twin Peaks. After all, it was an enormous coincidence that the slab came to have those markings at all. However, let’s say that I make a stencil of the accidental “map” and give it to a friend of mine who is planning to visit Twin Peaks. Now it seems proper to claim that the markings really represent Twin Peaks. If this is so, then whatever power the markings have for representing Twin Peaks depends upon my intending them to be such a representation. Since maps do not intrinsically (i.e., on their own) represent anything, they have only derived, not original, intentionality. They can be about the world only when we make them so.
2. Intentional Egalitarianism
In contrast to the above view, consider a popular alternative, which holds that there is no real distinction to be drawn between original and derived intentionality. Some who hold this view6 argue that because we are wholly physical creatures, products of evolution, there is no reason to think that our intentionality is any more special or primitive than that of artifacts. After all, if we are fully natural and physical creatures designed by nature to possess minds that can represent the world, then the design processes leading to our ability to understand and make meanings are analogous to the design processes we use when we create meaning-mongering artifacts. Therefore, according to this view, there is no distinction to be made between the intentionality possessed by minds like ours and that possessed by artifacts. There is only one kind of intentionality, and it is possessed by minds and computer programs like Google Translate alike.
3. Conclusion
The Google Translate thought experiment purportedly showed that understanding and meaning cannot be the product of purely mechanical activity. If this is true, then computers on their own cannot be intentional systems, and meaning and understanding are unique to creatures like us who have original intentionality. If, on the other hand, Intentional Egalitarianism is correct, then we ought to believe that all kinds of artifacts possess intentionality. It is extremely difficult to be confident about which side we should take in this debate. However, it should be clear that we need a robust theory of intentionality in our philosophical toolbox before we can even begin to answer fundamental questions in the philosophy of mind and language concerning meaning, content, and representation.
Notes
1This is an updated and simplified version of John Searle’s famous Chinese Room thought experiment. See Searle (1980).
2‘Mentality’ here is standing in for a cluster of concepts that also includes meaning, mindedness, and mental content, among others.
3See Brentano (1995).
4See Searle (op. cit.) and Haugeland (2002).
5Philosophers would call the relation between the map and the town it represents a relation of isomorphism. This term means that there is a one-to-one correspondence between the parts of the map and the parts of the town.
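As a rough formal gloss (this notation is mine, not the article’s), an isomorphism can be stated as a structure-preserving bijection:

```latex
% Rough gloss, not in the original: an isomorphism between map M and town T
% is a bijection f that preserves the relevant relations (e.g., adjacency).
\[
  f \colon M \to T \ \text{is a bijection such that}\quad
  R_M(x,y) \iff R_T\bigl(f(x),\, f(y)\bigr) \quad \text{for all } x, y \in M.
\]
```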
6See Dennett (1996).
References
Searle, John R. “Minds, Brains, and Programs.” Behavioral and Brain Sciences 3, no. 3 (1980): 417-424.
About the Author
Addison Ellis is a lecturer at the University of Illinois at Urbana-Champaign, where he earned his Ph.D. He holds an M.A. in Philosophy from the University of Colorado at Boulder and a B.A. in Philosophy and Psychology from the University of Illinois at Urbana-Champaign. He is currently interested in philosophy of mind (especially problems of intentionality), epistemology (especially the role of philosophical intuitions in philosophical practice), Kant, and post-Kantian philosophy. https://philpeople.org/profiles/addison-ellis