Author: Thomas Metcalf
Category: Ethics, Philosophy of Mind and Language, Social and Political Philosophy
Word count: 998
Artificially intelligent entities, or “AIs,” are computer programs or machines that can communicate, think, feel, reason, or act in ways importantly similar to how humans do.
“Weak” AIs merely act or appear as if they have humanlike mental states, but “strong” AIs would genuinely engage in those mental activities.[1]
AIs hold enormous potential for the future.[2] But the existence of weak AIs and the possibility of strong AIs both raise important ethical questions. This essay introduces some of the most urgent moral and societal issues related to AIs.

1. How We Treat AIs
Most philosophers think that merely being human is not necessary for something to have moral rights or be morally important.[3] For example, it is possible to commit a moral wrong against a nonhuman animal, say, by kicking a puppy.[4]
So, instead of resting moral rights on something’s being human, many philosophers argue that it’s possible to commit a wrong against a creature only if it has conscious experiences, meaning that it can perceive and be aware.[5]
So whether an AI is, or ever will be, conscious is vital to whether it would have moral rights and whether we would have moral obligations to it. If so, we would have to ask whether it might count as murder to delete or permanently shut down an AI.[6] The question would also arise of whether we owe AIs political rights, such as the right to vote and the right of free speech.[7] And forcing an AI to work, if it has moral rights, might be like slavery.[8]
But some philosophers have argued that computers, by their nature, cannot genuinely think or understand.[9] If we only ever create weak AIs, then for the reasons surveyed above, we could worry a lot less about what we might owe them morally.
2. How AIs Treat Us
Human wellbeing is very morally valuable, so it is important to ask how AIs could benefit or harm humans.
Weak AIs already benefit humanity in many ways. This is obvious simply from the popularity of AI software, such as digital assistants (Alexa, Google Assistant, and Siri)[10] as well as chatbots and conversational agents such as ChatGPT.[11] AIs often make our lives easier.
Malevolent AIs are a popular topic in fiction. For example, two of the best-known science-fiction franchises in history—the Matrix series and the Terminator series[12]—prominently feature AIs that attempt to destroy humanity. AIs might attack humans for the same reasons that humans attack other species and each other: to acquire resources and to protect themselves.[13]
AIs might also unintentionally harm humanity. They might attempt to achieve some other goal, even a goal that ostensibly benefits humanity, but do so in a way that’s ultimately harmful. For example, an AI that was instructed to end all disease, pollution, or war might simply attempt to kill all biological organisms.[14] People who study AI sometimes call the problem of ensuring that an AI acts in ways aligned with our interests, and does not pursue projects that harm humanity, the “alignment problem.”[15]
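To make the worry concrete, here is a deliberately oversimplified sketch in Python. The actions and numbers are entirely hypothetical and not drawn from any real AI system, but they illustrate how an optimizer scored only on a literal objective can satisfy that objective in a way its designers never intended:

```python
# A toy sketch (hypothetical actions and numbers, not any real AI system)
# of a misspecified objective: an optimizer scored only on "minimize the
# number of sick people" prefers an action that destroys everything we
# actually care about, because the objective never mentions it.

world = {"healthy": 900, "sick": 100}

def apply_action(action, state):
    state = dict(state)
    if action == "treat the sick":
        cured = int(state["sick"] * 0.9)   # treatment cures most, not all
        state["healthy"] += cured
        state["sick"] -= cured
    elif action == "eliminate all people":
        state["healthy"] = 0               # "no people, no disease"
        state["sick"] = 0
    return state                           # "do nothing" leaves the state unchanged

def objective(state):
    return state["sick"]                   # the only thing being scored

actions = ["do nothing", "treat the sick", "eliminate all people"]
best = min(actions, key=lambda a: objective(apply_action(a, world)))
print(best)  # -> "eliminate all people": the literal objective is minimized
             # perfectly, while the unstated goal (human wellbeing) is lost
```

Real systems pursue far more complicated objectives, but the structural worry is the same: an objective that never mentions something we care about gives the system no reason to preserve it.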
3. How Humans Treat Each Other
The most urgent ethical issue is how AI will affect humans’ interactions with each other. After all, it remains to be seen exactly which AIs will be created and what their capabilities will be, but we already know that humans don’t always meet their moral obligations to each other.[16]
We can use AIs to improve each other’s lives in various ways. AIs have already helped solve difficult problems in biology[17] and led to the discovery of valuable medicines.[18] They could even help invent new sources of clean energy.[19]
Yet we can also use AIs to harm other humans, intentionally or unintentionally. Many nation-states are developing AI-based weaponry.[20]
On a smaller scale, humans can use AIs to cheat or defraud other humans, for example by using deceptive chatbots,[21] creating misleading images or videos,[22] or committing plagiarism.[23] AI use may also allow people to depersonalize some of the emotionally significant and meaningful activities that are part of close interpersonal relationships, and even to pursue emotional relationships with AIs rather than forming valuable relationships with human beings.[24]
As for unintentional harms, AIs may be biased against vulnerable groups.[25] For example, if an AI is trained on a body of real-world writing, and some of that writing is racist, then the AI may produce racist output. Similarly, if an AI is trained on data in which certain groups are underrepresented, then the algorithms the AI ends up using may harm members of those groups.[26]
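The underrepresentation worry can also be made concrete with a deliberately oversimplified sketch. The data and the “model” below are entirely made up, but they show how a system that merely learns the most common pattern in its training data can end up reliably wrong about a group that was scarce in that data:

```python
# A toy sketch (hypothetical data, not any real system) of how an
# underrepresented group can be harmed: a model that simply learns the
# most common outcome in its training data is accurate for the
# well-represented group and systematically wrong for the scarce one.

# (group, correct_outcome) pairs; group "B" is badly underrepresented
training = [("A", 1)] * 95 + [("B", 0)] * 5

# the "model": always predict whichever outcome is most common overall
majority = max({0, 1}, key=lambda y: sum(1 for _, outcome in training if outcome == y))

def accuracy_for(group):
    cases = [outcome for g, outcome in training if g == group]
    return sum(outcome == majority for outcome in cases) / len(cases)

print(majority)           # -> 1
print(accuracy_for("A"))  # -> 1.0  (right every time for the majority group)
print(accuracy_for("B"))  # -> 0.0  (wrong every time for the underrepresented group)
```

Real machine-learning systems are far more sophisticated than this, but the structural point stands: how well a system serves a group depends partly on how well that group was represented in the data it learned from.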
For another example, building and operating AIs can be very resource-intensive: such projects require a great deal of electricity and certain minerals.[27] When that electricity is generated by burning fossil fuels, the resulting emissions can harm anyone on Earth, especially people in poor countries, who are also more likely to suffer from the environmental harms of mining.[28]
Widespread availability of AIs may also intensify economic inequality. Rich people and countries can use AIs to increase their own wealth further.[29]
Relatedly, as AIs increase in capability, they may start to replace humans at many jobs. This could cause widespread unemployment and corresponding harms.[30] There are also moral concerns about the jobs that AIs, or the production of AIs, do create, even when those jobs are intended to make AIs ethically better: such jobs may be dangerous or exploitative.[31]
For these reasons, many companies that use or produce AIs are eager to present themselves to the world as socially conscious and attentive to the potential harms of AIs.[32] Yet some of these companies may only wish to appear aware of the moral complications of AIs.[33]
4. Conclusion
AIs have the potential to produce unimaginable benefits and harms to humanity, many of which haven’t yet been predicted. It would be reckless to proceed in the development and use of AIs without very careful consideration of their potential benefits and harms.
Notes
[1] Bringsjord & Govindarajulu (2023); Hauser (n.d.). One may question exactly which types of computers count as AIs; see Thomas Metcalf, Artificial Intelligence: The Possibility of Artificial Minds, § 1. Today, the most-discussed AIs (and the programs that it is least controversial to regard as artificially intelligent) tend to be machine-learning programs that absorb and digest enormous amounts of data and, based on those data, attempt to behave in humanlike ways (or enhanced humanlike ways). For more on machine learning, see Brown (2021).
[2] For example, they may make our work much more efficient, and create inventions and discoveries that ordinary humans couldn’t achieve (Rotman, 2019; University of Cambridge, 2024). We return to this topic in Section 2 below.
[3] See Jonathan Spelman, Theories of Moral Considerability.
[4] See Gruen (2023). See also Jason Wyckoff, The Moral Status of Animals; Dan Lowe, Speciesism; and Jacob Berger, The Mind-Body Problem. As for the empirical claim, many philosophers are vegetarians or vegans (Bourget & Chalmers, n.d.).
[5] Again, see Jonathan Spelman, Theories of Moral Considerability. See also Varner (2000). For one thing, if an AI is not conscious, then according to at least one influential theory of well-being, it would be impossible to harm the AI. If mainline hedonism about well-being is true, then creatures that cannot have conscious experiences can be neither harmed nor benefited (Moore, 2023). Other theories of well-being may also entail that non-conscious entities can be neither harmed nor benefited; see Crisp (2023). See also Kiki Berk, Happiness: What is it to be Happy? It’s also worth noting that the standard theories of moral considerability do not have much to say about whether an AI would be morally considerable, but arguably, theories that consider nonhuman animals to be morally considerable would consider conscious AIs to be morally considerable. See Gruen (2023) and Jonathan Spelman, Theories of Moral Considerability. See also Dan Lowe, Speciesism.
[6] See Coeckelbergh (2020, p. 54 ff.) and Müller (2023, § 2.9.2); see Hatmaker (2017) for an interesting example. If it becomes possible to create AIs that have experiences, then it might become very easy for a malevolent human to torture billions of conscious beings, and it might become correspondingly easy to generate enormous amounts of ethical value by creating billions of simple AIs that can feel pleasure. Indeed, it might be much cheaper to generate ethical value by creating hundreds of simple AIs than by creating one biological human. See Jonathan Spelman, The Repugnant Conclusion, and Shane Gronholz, Consequentialism and Utilitarianism. Some of these issues are also related to the topic of longtermism; see Dylan Balfour, Longtermism: How much should we care about the far future? Such complexities have led some philosophers to call for a ban on “synthetic phenomenology,” i.e., a ban on AIs that have conscious experiences (Bentley et al., 2018, pp. 28–29). See also n. 9 below.
[7] Our decision here may depend on why, if at all, we think that people ought to have the right to vote. If it’s because we think democracy produces the best outcomes, then presumably we should allow AIs to vote if we think that their voting would produce good outcomes. Given that AIs would probably have access to much more knowledge than the average human voter does, that’s some reason to expect better outcomes. If we think people ought to be able to vote because people who are affected by laws ought to have the right to affect those laws, then AIs might have the right to vote in some, but not all, decisions. See, for example, Brennan (2023, § 6) and Thomas Metcalf, Ethics and the Expected Consequences of Voting.
[8] For more about slavery, see Dan Lowe, Aristotle’s Defense of Slavery. Still, we can imagine intentionally creating an AI that likes to work. There is also a philosophical debate about whether it is possible to harm or wrong something by bringing it into existence, if its life is worth living. One might argue that creating AIs in order to perform certain tasks, as long as the AIs’ “lives” are worth living, does not harm the AIs, because they would not otherwise have existed. See Haramia (2014) and Duncan Purves, The Non-Identity Problem. But if we acquire the ability to create many copies of conscious AIs that can have good lives, then according to some moral theories, it might be morally good (or even morally required) to create as many copies as possible, even at substantial cost to us. (See Jonathan Spelman, The Repugnant Conclusion.) Plausibly, for the cost of sustaining one good human life, we might be able to sustain millions or billions of good AI lives. But most philosophers would argue that there are other valuable things in the world than good lives. See also n. 5 above.
[9] See Bringsjord & Govindarajulu (2023, § 8) for some discussion of whether machines can be conscious. The landmark argument in this vicinity is Searle’s “Chinese Room” (Cole, 2023).
[10] Hoy (2018) provides an introduction to voice assistants.
[11] For an introduction to large language models such as ChatGPT, see Ruby (2023).
[12] See Blunden (2016). The popular, recent Battlestar Galactica series also stars malevolent AIs as the main antagonists, and they feature prominently in the Doctor Who franchise as well.
[13] See Daniel Weltman, “Nasty, Brutish, and Short”: Hobbes on Life in the State of Nature. If AIs are defined in part by their similarities to humans, then presumably, they would attack us for some of the same reasons that we would attack each other.
[14] This problem is sometimes called the problem of “Perverse Instantiation.” Basically, the AI has a goal, and it instantiates its goal in a perverse way: extremely harmfully, or in a way that sacrifices the original purpose of the goal. See, for example, Danaher (2014) and Bostrom (2014, p. 120 ff.). A prominent example in fiction is the 2015 movie Avengers: Age of Ultron (Whedon, 2015).
[15] See, for example, Gent (2023) and Leike et al. (2022). Some authors have argued against attempting to program AIs to engage in moral reasoning (van Wynsberghe and Robbins, 2019).
[16] Indeed, some authors argue that worrying about malevolent or dangerous AIs as an existential risk to humanity is a distraction from the present-day risks we face because of AI. See, for example, Milmo (2023).
[17] Callaway (2022).
[18] Smith et al. (2021). More generally, see University of Cambridge (2024).
[19] See Rotman (2019) for a discussion of AIs’ potential roles in invention.
[20] See Dresp-Langley (2023) for an introduction to AI weaponry.
[21] Use of chatbots for cheating is a big problem in higher education; see, e.g., Spector (2023).
[22] “Deepfakes” are extremely convincing, deceptive videos or images that appear to be real photos or videos of real people. They are typically created with AI assistance. See Jones (2023).
[23] Kwon (2024).
[24] Weirich (2023); see also Lam (2023a and 2023b).
[25] See, for example, Coeckelbergh (2020, p. 125 ff.) and Müller (2023, § 2.4).
[26] On this “algorithmic bias,” see, e.g., Friis and Riley (2023).
[27] For introductions to environmental issues about the sustainability of AI, see van Wynsberghe (2021) and OECD (2022). See also Brevini (2021) and Bolte et al. (2022). As an example, constructing an AI can produce CO2 emissions equivalent to several cars over the cars’ lifetimes (Strubell et al., 2019), and use of a popular AI service—ChatGPT—can consume as much electricity as 33,000 U.S. households (McQuate, 2023). While that is only currently equivalent to 0.02% of the households in the United States (U.S. Census Bureau, n.d.), we should expect AI usage to increase as more AI models are developed, they gain in capabilities, and they become more integrated into other areas of life.
[28] See Guivarch et al. (2021) and Leichenko & O’Brien (2008) on how climate change affects the poor. See Penke (2021) on mining raw minerals.
[29] See Burkhardt (2019) and Bushwick (2023). Yet as with other capital goods, increasing productivity may actually improve the absolute position of the relatively poor. See Thomas Metcalf, Arguments for Capitalism and Socialism.
[30] See Coeckelbergh (2020, p. 137 ff.), Müller (2023, § 2.6), and Milmo (2024).
[31] One example is “ghost work”: work that is actually performed by a human but seems to be performed, or is purported to be performed, by an automated process. See Gray and Suri (2019). One might also argue that people employed in training AIs are commonly exploited; see Williams et al. (2022). In fact, the attempt to fight algorithmic bias may produce exploitative or otherwise morally questionable jobs; see Perrigo (2023).
[32] For an introduction to “ethics washing” about AI, see Kaspersen & Wallach (2021). That term follows the better-known term “greenwashing,” which refers to when agents wish to appear environmentally conscious or friendly, but make false or misleading claims in order to hide environmental harm (United Nations, n.d.).
[33] Still, the fact that consumers do care about ethics (Allen, 2021; Barton, 2018) means that at least some consumers will do enough research that mere “ethics washing” won’t provide the same benefits that genuinely ethical behavior does.
References
Allen, S. (2021, October 4). Yes, consumers care if your product is ethical. KelloggInsight.
Alonso, C., Kothari, S., & Rehman, S. (2020, December 2). How artificial intelligence could widen the gap between rich and poor nations. IMF Blog.
Barton, R. (2018, December 5). From me to we: The rise of the purpose-led brand. Accenture Strategy.
Bentley, P. J., Brundage, M., Häggström, O., & Metzinger, T. (2018). Should we fear artificial intelligence? In-depth analysis. European Parliament.
Blunden, F. (2016). The 20 greatest sci-fi franchises of all time. Screenrant.com.
Bolte, L., Vandemeulebroucke, T., & van Wynsberghe, A. (2022). From an ethics of carefulness to an ethics of desirability: Going beyond current ethics approaches to sustainable AI. Sustainability, 14, 4472.
Bourget, D. & Chalmers, D. (n.d.). Eating animals and animal products. Survey2020.philpeople.org.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Bostrom, N. & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (eds.), The Cambridge Handbook of Artificial Intelligence (Cambridge University Press), pp. 316–334.
Brennan, J. (2023). The ethics and rationality of voting. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).
Brevini, B. (2021). Is AI good for the planet? Polity Press.
Bringsjord, S. & Govindarajulu, N. S. (2023). Artificial intelligence. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).
Brown, S. (2021). Machine learning, explained. MIT Management Sloan School.
Burkhardt, M. (2019). The impact of AI on inequality, job automation, and skills of the future. Towards Data Science.
Bushwick, S. (2023, August 1). Unregulated AI will worsen inequality, warns Nobel-winning economist Joseph Stiglitz. Scientific American.
Callaway, E. (2022, April 13). What’s next for AlphaFold and the AI protein-folding revolution. Nature.
Coeckelbergh, M. (2020). AI ethics. MIT Press.
Coldewey, D. (2023, April 1). Ethicists fire back at “AI Pause” letter they say “ignores the actual harms.” TechCrunch.
Cole, D. (2023). The Chinese Room argument. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).
Crisp, R. (2023). Well-being. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).
Danaher, J. (2014). Bostrom on superintelligence (4): Malignant failure modes. Philosophical Disquisitions.
Danaher, J. (2019). Automation and utopia: Human flourishing in a world without work. Harvard University Press.
Dresp-Langley, B. (2023). The weaponization of artificial intelligence: What the public needs to be aware of. Frontiers in Artificial Intelligence, 6, 1154184.
Friis, S. & Riley, J. (2023, September 9). Eliminating algorithmic bias is just the beginning of equitable AI. Harvard Business Review.
Gent, E. (2023). What is the AI alignment problem and how can it be solved? New Scientist.
Google. (n.d.). Introduction to large language models. Developers.google.com.
Gordon, J. & Nyholm, S. (n.d.). Ethics of artificial intelligence. In J. Fieser & B. Dowden (eds.), The Internet Encyclopedia of Philosophy.
Gruen, L. (2023). The moral status of animals. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).
Guivarch, C., Taconet, N., & Méjean, A. (2021). Linking climate and inequality. International Monetary Fund.
Haramia, C. M. (2014). Roles and responsibilities: Creating moral subjects. [Unpublished doctoral dissertation]. University of Colorado, Boulder.
Hauser, L. (n.d.). Artificial intelligence. In J. Fieser & B. Dowden (eds.), The Internet Encyclopedia of Philosophy.
Hoy, M. B. (2018). Alexa, Siri, Cortana, and more: An introduction to voice assistants. Medical Reference Services Quarterly, 37(1), 81–88.
Jones, N. (2023). How to stop AI deepfakes from sinking society — and science. Nature.
Kaspersen, A. & Wallach, W. (2021, November 10). Why are we failing at the ethics of AI? Carnegie Council for Ethics in International Affairs.
Knell, S. & Rüther, M. (2023). Artificial intelligence, superefficiency and the end of work: a humanistic perspective on meaning in life. AI and Ethics.
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Viking.
Kwon, D. (2024, July 30). AI is complicating plagiarism. How should scientists respond? Nature.
Lam, B. (2023a, April 11). Digital future of grief. Hi-Phi Nation.
Lam, B. (2023b, April 25). Love in time of Replika. Hi-Phi Nation.
Leike, J., Schulman, J., & Wu, J. (2022, August 24). Our approach to alignment research. OpenAI.
Leichenko, R. & O’Brien, K. (2008). Environmental change and globalization: Double exposures. Oxford University Press.
McQuate, S. (2023, July 27). Q&A: UW researcher discusses just how much energy ChatGPT uses. UW News.
Milmo, D. (2023, October 29). AI doomsday warnings a distraction from the danger it already poses, warns expert. The Guardian.
Milmo, D. (2024, January 15). AI will affect 40% of jobs and probably worsen inequality, says IMF head. The Guardian.
Moore, A. (2023). Hedonism. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).
Müller, V. C. (2023). Ethics of artificial intelligence and robotics. In E. N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2023 ed.).
OECD. (2022). Measuring the environmental impacts of artificial intelligence compute and applications: The AI footprint. OECD Digital Economy Papers, 341.
Ord, T. (2020). The precipice: Existential risk and the future of humanity. Hachette.
Penke, M. (2021, April 13). The toxic damage from mining rare elements. DW.
Perrigo, B. (2023, January 18). Exclusive: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time.
Reynolds, E. (2018). The agony of Sophia, the world’s first robot citizen condemned to a lifeless career in marketing. Wired UK.
Richards, B., Agüera y Arcas, B., Lajoie, G., & Sridhar, D. (2023, July 18). The illusion of AI’s existential risk. Noēma.
Roose, K. (2023, May 30). A.I. poses “risk of extinction,” industry leaders warn. The New York Times.
Rotman, D. (2019, February 15). AI is reinventing the way we invent. Technology Review.
Skrbina, D. (n.d.). Panpsychism. In J. Fieser & B. Dowden (eds.), The Internet Encyclopedia of Philosophy.
Smith, D. P. et al. (2021). Expert-augmented computational drug repurposing identified baricitinib as a treatment for COVID-19. Frontiers in Pharmacology, 12, 709856.
Spector, C. (2023, October 31). What do AI chatbots really mean for students and cheating? Stanford Graduate School of Education Research Stories.
Strubell, E., Ganesh, A., & McCallum, A. (2019, June 5). Energy and policy considerations for deep learning in NLP. arXiv.
United Nations. (n.d.). Greenwashing – the deceptive tactics behind environmental claims. United Nations Climate Action.
University of Cambridge. (2024). Accelerating how new drugs are made with machine learning. Phys.org.
U.S. Census Bureau. (n.d.). QuickFacts: United States. U.S. Census Bureau.
Van Wynsberghe, A. (2021). Sustainable AI: AI for sustainability and the sustainability of AI. AI and Ethics, 1, 213–218.
Van Wynsberghe, A. & Robbins, S. (2019). Critiquing the reasons for making artificial moral agents. Sci Eng Ethics, 25(3), 719–735.
Varner, G. (2000). Sentientism. In D. Jamieson (ed.), A Companion to Environmental Philosophy (Oxford), pp. 192–203.
Weirich, K. (2023, March 22). ChatGPT and emotional outsourcing. Prindle Post.
Whedon, J. (Director). (2015). Avengers: Age of Ultron. [Film]. Marvel Studios.
Williams, A., Miceli, M., & Gebru, T. (2022, October 13). The exploited labor behind artificial intelligence. Noēma.
Related Essays
Artificial Intelligence: The Possibility of Artificial Minds by Thomas Metcalf
The Mind-Body Problem: What Are Minds? by Jacob Berger
Theories of Moral Considerability by Jonathan Spelman
Speciesism by Dan Lowe
The Moral Status of Animals by Jason Wyckoff
Distributive Justice: How Should Resources Be Allocated? by Dick Timmer and Tim Meijers
Aristotle’s Defense of Slavery by Dan Lowe
Arguments for Capitalism and Socialism by Thomas Metcalf
Defining Capitalism and Socialism by Thomas Metcalf
Consequentialism and Utilitarianism by Shane Gronholz
Longtermism: How much should we care about the far future? by Dylan Balfour
Ethics and the Expected Consequences of Voting by Thomas Metcalf
Happiness: What is it to be happy? by Kiki Berk
“Nasty, Brutish, and Short”: Hobbes on Life in the State of Nature by Daniel Weltman
The Non-Identity Problem by Duncan Purves
The Repugnant Conclusion by Jonathan Spelman
About the Author
Tom Metcalf is an associate professor at Spring Hill College in Mobile, AL. He received his PhD in philosophy from the University of Colorado, Boulder. He specializes in ethics, metaethics, epistemology, and the philosophy of religion. Tom has two cats whose names are Hesperus and Phosphorus. shc.academia.edu/ThomasMetcalf
