Artificial Intelligence: The Possibility of Artificial Minds

Author: Thomas Metcalf
Categories: Philosophy of Mind and Language, Philosophy of Science
Word count: 1000

Many important philosophical discussions are about minds and persons.[1] We mainly think about these issues in the context of ourselves: human beings.

But an artificial intelligence or “AI” would be an entity that thinks or acts like an average human being—or even surpasses the average human being in cognitive abilities—yet is a machine or computer program.[2]

The idea of artificial intelligence is closely connected to several important philosophical discussions about the nature of minds. This essay is an introduction to philosophical thinking about artificial minds and AIs.

[Image: an illustration of a brain made of circuitry.]

1. Types of Artificial Intelligence

There are many ways of describing AIs, but two of the most common are to distinguish them by the types of tasks they can perform, and by whether they actually have minds like humans’ minds.

Many computers can perform as well as humans at tasks that normally require complex thought or planning.[3] Thus, there may already be many examples of specific artificial intelligences: computer programs that can play chess or navigate to a destination.[4]

But an artificial general intelligence or “AGI” would have a wide variety of intellectual capabilities, probably meeting or surpassing most humans’ capabilities.[5]

We can also distinguish “strong” from “weak” AI: a strong AI would actually have a mind essentially the same as (or greater in capability than) a human mind, whereas a weak AI would only appear to have those capabilities.[6] Strong AIs would have genuine thoughts, feelings, and experiences.

As computing power increases and AIs become more sophisticated, there may be a point at which AIs’ behavior is indistinguishable from humans’.[7] Still, that would not entail that the AI was actually a conscious person.

2. Artificial Intelligence, Consciousness, and Personhood

What does it mean to say that something is conscious? Philosophers often describe consciousness in terms of first-person experiences, or what it is like to be some organism.[8] Arguably, there is something it is like, from the first-person perspective, to be a bat, or a horse, but there is nothing it is like to be a rock, or a mushroom.[9] So, adult humans, bats, and horses are conscious, but rocks and mushrooms are not.

We can also understand consciousness in terms of sensory experiences. Computers can certainly react to stimuli, but we don’t normally think they’re having experiences of them. By analogy, a thermostat can react to a temperature’s dropping below a certain setpoint, but the thermostat doesn’t feel cold.
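
To make the thermostat analogy concrete, here is a minimal sketch in Python of the kind of stimulus-response rule such a device follows; the setpoint and the read_temperature_c function are invented for illustration, and nothing in the program involves feeling anything.

    # A minimal sketch of a thermostat's stimulus-response rule.
    # The setpoint and sensor function are illustrative assumptions,
    # not any real device's interface.
    SETPOINT_C = 20.0

    def read_temperature_c() -> float:
        # Hypothetical sensor reading; a real thermostat would poll hardware.
        return 18.5

    def thermostat_step() -> str:
        # React to the stimulus: switch the heater on or off.
        # The device "responds" to cold without feeling cold.
        if read_temperature_c() < SETPOINT_C:
            return "heater on"
        return "heater off"

    print(thermostat_step())  # prints "heater on"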

In general, there is no consensus on whether an AI could ever be conscious, but relatively few philosophers are committed to theories that rule it out completely.[10]

It may be that consciousness necessarily requires some kind of biological basis, such as brain cells. Try to imagine whether a system of metal and plastic pulleys, levers, and gears could have conscious experiences.[11] If it seems obvious that no such system could be conscious, no matter how complex, then it may seem that metal and plastic, at least, could not be the site of consciousness.[12]

Yet many philosophers would argue that if something can be coherently imagined, then that’s evidence that it’s possible.[13] So, if you can imagine a computer or a robot from science fiction having experiences, then that would be evidence that a strong AI (with conscious experiences) is possible.

Some philosophers argue that computers, by their very nature, could not engage in conscious thought or understanding. One example of such an argument is the “Chinese Room Argument,”[14] according to which computers merely manipulate symbols according to rules, without any intentional understanding of what the symbols mean.[15] By analogy, you might not know a foreign language, but you could act as if you did, for example, by running strings of text through a translator website. Even if you could thereby speak in a way that was indistinguishable from a native speaker, arguably, you would still not understand the language. The argument holds that computers “speak” languages in just that way: manipulating symbols without true understanding.
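
The Chinese Room’s picture of rule-following can be illustrated with a toy Python program: a lookup table that pairs incoming strings of symbols with outgoing strings, much as the room’s rule book does. The table entries are invented placeholders; the point is only that nothing in the program represents what the symbols mean.

    # Toy illustration of symbol manipulation without understanding:
    # incoming strings are matched to canned replies by rule, the way the
    # person in the Chinese Room matches symbols using the rule book.
    # The entries are invented placeholders, not real sentences.
    RULE_BOOK = {
        "symbol-string-1": "reply-string-1",
        "symbol-string-2": "reply-string-2",
    }

    def room_reply(incoming: str) -> str:
        # Look the symbols up and pass back whatever the rules dictate;
        # nothing here represents the meaning of either string.
        return RULE_BOOK.get(incoming, "default-reply")

    print(room_reply("symbol-string-1"))  # prints "reply-string-1"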

However, the most popular theory of consciousness (although still a minority viewpoint)[16] is that mental states are defined by the functional roles they play in some cognitive system.[17] According to this theory, functionalism, a computer’s “experiences” could in principle play the right role with respect to its other “mental” features, such that the computer would be conscious.[18]

If an AI were conscious, it might be self-aware, and it might easily be far more rational than the average human.[19] Thus, AIs might also be persons (in a morally important, psychological sense),[20] and we would need to think about whether they have moral and legal rights.[21]

3. Identifying Artificial Consciousness

Even if we decide computers could be conscious, that would not tell us how to learn or verify that some computer was actually conscious. Many things act conscious but aren’t: a character in a realistic video game might cry out in pain, but no one is consciously feeling anything.[22]

A famous proposed test for deciding whether something is an AI is the Turing Test. In a standard version of the test, humans chat with the computer program in question, without being told that it is a computer program. If the humans can’t recognize, more than 50% of the time, that they’re actually talking to a computer (rather than to a real human), the computer passes the test.
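
As a rough sketch of the scoring rule just described, suppose each judge simply records whether they correctly spotted the computer; the judgment list below is made-up data, not the result of any real test.

    # Rough sketch of the Turing Test's scoring rule described above:
    # the program "passes" if judges correctly identify it as a computer
    # no more than half the time. The judgment list is made-up data.
    from typing import List

    def passes_turing_test(judgments: List[bool]) -> bool:
        # Each entry is True if that judge correctly spotted the computer.
        correct = sum(judgments)
        return correct / len(judgments) <= 0.5

    # Example: 3 of 10 judges spotted the computer, so it passes.
    print(passes_turing_test([True, False, False, True, False,
                              False, True, False, False, False]))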

Of course, it’s questionable whether passing the Turing Test establishes that a computer program is truly a strong AI.[23] As noted, something can act conscious without actually being conscious.

It’s not easy to know whether some AI is actually conscious, but it is notoriously difficult to be certain that other humans are conscious, anyway.[24] To conclude that other humans are conscious, we can observe their behavior, but many AIs today can behave in fairly human-like ways.[25]

4. Conclusion

There are many other important philosophical issues related to AIs, especially in the ethics of how we use them.[26] We should expect that philosophical analysis will continue to be fruitful as we see what the future holds for AIs and ourselves.

Notes

[1] See Thomas Metcalf, What is Philosophy?

[2] Bringsjord & Govindarajulu (2023); Hauser (n.d.). See also Russell & Norvig (2021, § I.1) for an introduction to the concept. We normally think of AIs as built by humans, but of course, they could be built by extraterrestrial beings or by other AIs; see below.

[3] Bringsjord & Govindarajulu (2023, § 1) provide a brief outline of the history of thinking about AIs. For example, chess-playing and Jeopardy!-playing computer programs are commonly considered AIs, even though they’re not fully general.

[4] Such AIs have existed for a long time already. Anyoha (2017) provides a useful history of artificial intelligence.

[5] Strictly speaking, what makes the AI “general” is the breadth of its abilities, not that it is far more intelligent than humans. But in most discussions, it tends to be assumed that the AGI will be better than humans at these tasks, presumably because specific AIs tend to be much better than most humans at their specific tasks. See Bringsjord & Govindarajulu (2023, § 5) for a general discussion of AGI. See Heath (2024) for a report on a current effort to build AGI. And if an AGI develops the ability to build even-more-intelligent AGIs, then the capabilities of AIs may increase exponentially; this hypothetical situation is often known as the “singularity.” The idea is that each new, superintelligent AI would use its capabilities to build an even-more-intelligent AI. Chalmers (2010) provides an extensive discussion of the idea and implications of such a singularity.

[6] Nowadays, some of the most commonly used software that could be considered an AI is the large language model. These programs can speak like humans and appear to understand human languages by predicting words and statements that, based on the AI’s training (commonly, “reading” sentences and pages of text), are plausibly something a human would say at that point (Google, n.d.; IBM, n.d.). As far as we know, these are examples of weak AIs.
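
As a toy illustration of that kind of word prediction (the tiny “training” text and the bigram counting below are drastic simplifications invented for illustration, not how production large language models are built):

    # Toy next-word predictor: count which word most often follows each
    # word in a tiny "training" text, then predict by lookup. Real large
    # language models are vastly more sophisticated; this only illustrates
    # the idea of predicting a plausible continuation.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat and the cat slept".split()

    following = defaultdict(Counter)
    for word, nxt in zip(corpus, corpus[1:]):
        following[word][nxt] += 1

    def predict_next(word: str) -> str:
        # Return the most frequent continuation seen in "training."
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    print(predict_next("the"))  # prints "cat"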

[7] Bringsjord & Govindarajulu (2023, § 8.1), for example, regard weak AI as obviously possible.

[8] The landmark article is Nagel (1974). See also van Gulick (2023, § 4) for discussion of the phenomenal or qualitative content of consciousness.

[9] Van Gulick (2023, § 1) regards this as the most fundamental and commonly used notion of consciousness.

[10] For an introduction to theories of the mind, see Jacob Berger, The Mind-Body Problem: What are Minds? Presumably, eliminativism would rule out the possibility of conscious AIs, as would type-type identity theory, but those positions are questionable (Bourget & Chalmers, n.d.a; Van Gulick, 2023, § 8.2). Note that if we discover that computer programs can be conscious, then one important implication might be that some forms of skepticism about the external world are much more plausible than before. After all, if computer programs can be conscious, then we may be computer programs in a simulated environment. Bostrom (2003) argues that if computer programs can be conscious, conscious computer programs who believe themselves to be biological humans may greatly outnumber genuine biological humans. See also Andrew Chapman, External World Skepticism.

[11] This example is a version of one borrowed from Leibniz; see van Gulick (2023, § 1). See Hauser (n.d., § 4.c.iii) on this sub-debate about whether computers can feel or have experiences.

[12] A sizeable minority of philosophers hold that the mind is not a physical object (Bourget & Chalmers, n.d.b). See also Jacob Berger, The Mind-Body Problem: What are Minds? If the mind is not a physical object, then merely constructing physical objects that act like brains may not be sufficient to produce conscious minds. However, this doesn’t actually tell us whether AIs can be conscious, because it doesn’t tell us whether arranging physical objects in the right way would produce (non-physical) minds. See also Gennaro (n.d.) on consciousness, especially § 6.

[13] See Kirk (2023, § 5) on whether conceivability or imaginability is evidence of possibility. See Bob Fischer, Modal Epistemology: Knowledge of Possibility and Necessity.

[14] See Cole (2023).

[15] In the original argument, a person sits in an opaque room and is passed sentences, on a piece of paper, in a Chinese language such as Mandarin. The person looks at the sentence (a string of symbols), finds it in a book next to a corresponding set of symbols, writes the corresponding symbols on a paper, and passes them back out. The person in the room did not grow up speaking Mandarin and has never taken a course in it, nor ever heard it spoken aloud, nor seen it written before, nor ever met a person who speaks Mandarin. But to a person outside the room, it appears as if the person in the room speaks Mandarin fluently. Yet we are supposed to conclude that the person in the room does not understand Mandarin, and that computers in general “speak” languages in a similar way, without actually understanding the language. For more on the Chinese Room, see, e.g. Cole (2023). An interesting related question is how we could decide, from our perspective, whether some computer or machine was conscious; see e.g. Borderline Consciousness (n.d.). For more on intentionality, see Addison Ellis, Intentionality.

[16] See Bourget & Chalmers (n.d.a), according to whom 33.04% of philosophers accept or lean towards functionalism (cf. Gennaro 2023, § 3.b.v; Levin, 2023). If functionalism about consciousness is true, then a creature that was physically very different from a biological animal could still be conscious.

[17] See Levin (2023) for an extended discussion of functionalism, according to which minds or mental states are defined by the roles they play in cognitive systems. Also, some philosophers believe that there is consciousness pervading the natural world, even in what we would normally consider inanimate objects, such as the individual atoms in the elements that typically compose computer chips; see Goff et al. (2023) and Skrbina (n.d.).

[18] For example, if a robot got damaged, it could form the “belief” that something dangerous was nearby, and the “intention” to move away from the danger. Of course, a common criticism of functionalism is that a being could have such “experiences” (defined functionally) without actually having the kinds of conscious, first-person, subjective experiences that we consider paradigmatic of consciousness. See Levin (2023, § 5.5).
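
A toy sketch of this robot example, where the “belief” and “intention” labels name nothing over and above their causal roles (the damage threshold and state names are invented for illustration):

    # Toy sketch of mental states defined by functional role: the "belief"
    # is just whatever state damage input causes, and the "intention" is
    # just whatever state that belief causes, which in turn causes retreat.
    # The threshold and names are invented for illustration.
    def robot_step(damage_level: float) -> str:
        belief_danger_nearby = damage_level > 0.5   # caused by damage input
        intention_move_away = belief_danger_nearby  # caused by the "belief"
        if intention_move_away:
            return "retreat"                        # behavior it causes
        return "continue"

    print(robot_step(0.9))  # prints "retreat"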

[19] Computers might be more epistemically rational in the sense that they have a much higher proportion of true to false beliefs than the average person, and they might be more instrumentally rational in the sense that they make decisions that are better aligned with their goals. On epistemic rationality, see Todd Long, Epistemic Justification, and Watson (n.d.), and on instrumental rationality, see Kolodny & Brunero (2023). And if some AIs are persons, they might then have moral rights; see Warren (1973, § II.2) for an example of how persons might have moral rights.

[20] For some discussion of persons and ethics, see Shoemaker (2023), especially §§ 2.2 and 7. See also Warren (1973, § II.2) for a landmark discussion of the moral implications of considering something to be a person.

[21] In turn, if some AIs are persons, then this would imply that they have many philosophically interesting features. See, for example, Baker (2000).

[22] In fact, many philosophers believe that something could be physically just like a human and yet not be conscious. Such a “zombie” might be quite a bit like an AI. See Bourget & Chalmers (n.d.c) on philosophers’ opinions about zombies, and Kirk (2023) for an extensive introduction to zombies.

[23] The landmark article about the Turing Test was Turing (1950); see also Oppy & Dowe (2023). Arguably, some programs have already passed the Turing Test, but some have argued that this simply shows the limitations of the Test (Oremus, 2022).

[24] In philosophy, the “Problem of Other Minds” is the problem of whether and how we know that minds other than our own exist. See Avramides (2023, § 1) and Thornton (n.d.). Arguably, while we can perceive other people’s physical bodies and brains, we do not directly perceive their mental states, so there is room to question whether those mental states exist at all. A complicated question is what kind of evidence we might have for belief in others’ minds; see the essays in Epistemology.

[25] Arguably, a simple version of the Turing Test had already been passed half a century ago; see Weizenbaum (1966) and Oppy & Dowe (2023, § 1).

[26] Coeckelbergh (2020) is an accessible introduction. See also Gordon & Nyholm (n.d.) and Müller (2023). See also van Wynsberghe (2021) and the essays in Voeneky (2022).

References

Anyoha, R. (2017). Can machines think? SITN.

Avramides, A. (2023). Other minds. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).

Baker, L. R. (2000). Persons and bodies: A constitution view. Cambridge University Press.

Borderline Consciousness. (n.d.). Borderline consciousness. Borderlineconsciousness.com.

Bostrom, N. (2003). Are we living in a computer simulation? The Philosophical Quarterly, 53(211), 243–255.

Bourget, D. & Chalmers, D. (n.d.a). Survey results: Consciousness: identity theory, panpsychism, eliminativism, functionalism, or dualism? Survey2020.philpeople.org.

Bourget, D. & Chalmers, D. (n.d.b). Survey results: Mind: Physicalism or non-physicalism? Survey2020.philpeople.org.

Bourget, D. & Chalmers, D. (n.d.c). Survey results: Zombies: inconceivable, conceivable but not metaphysically possible, or metaphysically possible? Survey2020.philpeople.org.

Bringsjord, S. & Govindarajulu, N. S. (2023). Artificial intelligence. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).

Chalmers, D. J. (2010). The singularity: A philosophical analysis. Journal of Consciousness Studies, 17(9–10), 7–65.

Coeckelbergh, M. (2020). AI ethics. MIT Press.

Gennaro, R. J. (n.d.). Consciousness. In J. Fieser & B. Dowden (eds.), The Internet Encyclopedia of Philosophy.

Goff, P., Seager, W., & Allen-Hermanson, S. (2023). Panpsychism. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).

Google. (n.d.). Introduction to large language models. Developers.google.com.

Gordon, J. & Nyholm, S. (n.d.). Ethics of artificial intelligence. In J. Fieser & B. Dowden (eds.), The Internet Encyclopedia of Philosophy.

Hauser, L. (n.d.). Artificial intelligence. In J. Fieser & B. Dowden (eds.), The Internet Encyclopedia of Philosophy.

Heath, A. (2024, January 18). Mark Zuckerberg’s new goal is creating artificial general intelligence. The Verge.

IBM. (n.d.). What are large language models? IBM.com.

Kolodny, N. & Brunero, J. (2023). Instrumental rationality. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).

Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking.

Levin, J. (2023). Functionalism. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).

Müller, V. C. (2023). Ethics of artificial intelligence and robotics. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).

Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435–450.

Oppy, G. & Dowe, D. (2023). The Turing Test. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).

Oremus, W. (2022, June 17). Google’s AI passed a famous test — and showed how the test is broken. The Washington Post.

Russell, S. & Norvig, P. (2021). Artificial intelligence: A modern approach (4th ed.). Pearson.

Shoemaker, D. (2023). Personal identity and ethics. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).

Skrbina, D. (n.d.). Panpsychism. In J. Fieser & B. Dowden (eds.), The Internet Encyclopedia of Philosophy.

Thornton, S. P. (n.d.). Solipsism and the problem of other minds. In J. Fieser & B. Dowden (eds.), The Internet Encyclopedia of Philosophy.

Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

Van Gulick, R. (2023). Consciousness. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).

Van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethics, 1, 213–218.

Voeneky, S., Kellmeyer, P., Mueller, O., & Burgard, W. (eds.). (2022). The Cambridge handbook of responsible artificial intelligence: Interdisciplinary perspectives. Cambridge University Press.

Warren, M. A. (1973). On the moral and legal status of abortion. The Monist, 57(1), 43–61.

Watson, J. C. (n.d.). Justification, epistemic. In J. Fieser & B. Dowden (eds.), The Internet Encyclopedia of Philosophy.

Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45.

Related Essays

The Mind-Body Problem: What are Minds? by Jacob Berger

Intentionality by Addison Ellis

Modal Epistemology: Knowledge of Possibility and Necessity by Bob Fischer

Epistemic Justification by Todd R. Long

External World Skepticism by Andrew Chapman

What is Philosophy? by Thomas Metcalf


About the Author

Tom Metcalf is an associate professor at Spring Hill College in Mobile, AL. He received his PhD in philosophy from the University of Colorado, Boulder. He specializes in ethics, metaethics, epistemology, and the philosophy of religion. Tom has two cats whose names are Hesperus and Phosphorus. shc.academia.edu/ThomasMetcalf
