Two Views on the Ethics of AI
Jonathan Barlow and Matthew Crawford on the Moral and Metaphysical Dimensions of Artificial Intelligence
A young friend of mine applied for a job at one of those resort dude ranches out West. The job application asked, “What does hospitality mean to you?” Good question. He asked ChatGPT, and it came back with a pretty good answer. It answered the question; it didn’t break any rules; he sent it in. (I probably would have done the same thing. On second thought, at his age, I absolutely would have done the same thing.) Thanks to James Cameron and Arnold Schwarzenegger we know all about the world-ending possibilities of Artificial Intelligence, but what about the everyday ethical questions it raises, the questions for someone who is not trying to destroy the world but just trying to land a sweet summer gig?
Jonathan Barlow and Matthew Crawford offer two different takes on the ethical question of AI. Let’s call them the moral and the metaphysical models. They are different, not necessarily opposed but also not entirely complementary.
Barlow opens with one of the familiar dilemmas of technology (Jonathan Barlow, “Artificial Intelligence: Towards a Christian Perspective”). In one real-life instance, an AI-generated answer helped diagnose a dangerous medical condition. In another, a band of 13-year-olds used AI to generate pornographic images, which earned them felony charges. The point is that whether AI is good or bad depends on the user. Barlow’s goal is to help us see where those moral decisions happen.
There are three basic functions of AI, says Barlow: representation, modeling, and transformation. Each function is something that AI does, and each is a point of human interaction, whether by the programmer or the end user. Take modeling, for example. Modeling is the step where an AI system takes a picture of the world, so to speak. That picture, however, is subject to whatever biases are present in the data the system is fed, much of it scraped from the internet. We’ve got to be careful what we feed AI. Further, an AI programmer must train the AI to “label” the data in the model, that is, to categorize, compare, and connect the data to other data. That inevitably injects the values and biases of the programmer. “Is it possible that a human labeler working for a social media company will consider ‘Jesus is King of Kings’ to be an example of unacceptable political speech?” If memory serves, that happened once before in history, but perhaps if we are moral enough we can prevent it from happening again.
According to the moral model, the ethics of technology is about what we do with it. If good, moral people abide by good, moral rules, then what we do with technology will be good. Otherwise not. What Barlow does that is so helpful is to identify the various sites of moral activity in the realm of artificial intelligence (i.e., representation, modeling, and transformation). Good outcomes depend on Christians being involved at every level of the development of AI. As associate director of the Data Science Program at Mississippi State University, Barlow puts his money where his mouth is.
For Barlow the moral model also underwrites an optimistic belief in AI and provides theological license for further development. AI is “the next logical step in the human project of taking dominion”; with it “we are creating new perceiving, thinking, and acting persons [emphasis original] in our image (Gen 1:26).” I have doubts about the use of the word “person” there, but you get the point.
What about those of us who are not data scientists? The moral model helps us answer questions like, “should I use AI to destroy the world?” That prerogative belongs to God, so we humans, and the “persons” we create, should refrain. So far, so good, but the moral model does not answer every ethical question. For one thing, it does not help my young friend fill out his summer job application, which turns out to be a thornier question than “should I destroy the world?” because it does not easily submit to rule-based moral scrutiny.
Whenever there is a new piece of technology, we have to ask ourselves, what will we do with it? That’s where Jonathan Barlow comes in, but there is more to it than that. We also have to ask, what will technology do with us? The first perspective is a question of human doing; the second, Matthew Crawford’s take, is a matter of human being. This is what I call the metaphysical perspective. (You could also call it the anthropological perspective. I am not dogmatic about these terms.)
Crawford takes the classical view that to be human means to have a certain function and purpose, a telos. The purpose of a clock is to tell time. A good and “happy” clock tells time well, whatever other ancillary and adjacent activities it may perform; telling time well is its wellbeing. Likewise, a human being has a proper and final purpose . . . to do what? However we answer that question, language and meaning have something to do with it. To put language to the universe, to reflect, interpret, and understand the universe in human language, is to achieve a distinctively human excellence. That is what it is to be human, and to do it well is human wellbeing.
In Christian terms, the Word is God, and we humans are made in the image of the Word. To the degree we reflect the Word in our meaning-making activities, we fulfill our unique human identity well. Moreover, we attach moral value to the human logos-function. Here is Crawford quoting Charles Taylor: “Charles Taylor points out that in our use of language, ‘we are continuously responsive to rightness, and that is why we always recognize the relevance of a challenge that we have misspoken.’ In other words, we care.” A parrot or AI that misspeaks does not care. It does not experience a bad conscience, and it certainly doesn’t require any crucifixions, because parrots and AI are not persons. Persons are irreducibly moral, pace Jonathan Barlow. Parrots and AI are not. Persons are moral authors; AI is a moral artifact.
Language and logos are distinctly human activities. What happens when we don’t exercise this function? What happens when we don’t answer the question for ourselves but allow the parrot or AI to answer for us? We are not breaking a moral law. Rather, we are abdicating our humanity altogether. We are metaphysically impoverished. “If we accept [that] the challenge of articulating life in the first person, as it unfolds, is central to human being, then to allow an AI to do this on our behalf suggests self-erasure of the human.”
This is classic Matthew Crawford. Crawford defines the essence of freedom as agency, not autonomy. Autonomy is the opportunity to make the rules for oneself, to be free of external order. It may be good, but it is not a necessary condition of freedom. In its extreme form, auto-nomy is to be a law and law-giver to oneself and, by extension, to be one’s own god. Autonomy is an Enlightenment substitute for true freedom. True freedom lies in agency. Agency is the ability to achieve one’s purpose, the full actualization of human potency. Crawford likes to quote Nietzsche: joy is the feeling of one’s powers increasing. Agency, the fullness of one’s powers, can be achieved with very little autonomy, say, in a prison cell, as the most renowned prison writers prove: St. Paul, Boethius, Bonhoeffer, and M. L. King. (I wonder if this kind of transcendent freedom was the aim of an anchoress like Julian of Norwich, who forswore earthly autonomy and chose for herself a very small plot of worldly existence. Such a life is impossible to make sense of in the narrow-minded scheme of Enlightenment autonomy.) Alternatively, one may enjoy a broad field of autonomy and lose nearly all of one’s dignity if one loses agency, which is the condition of addiction. When we talk about AI, we are talking about agency in the realm of human language.
Crawford poses a situation similar to my friend’s, the case of a father writing a wedding toast for his daughter. AI can write a good job application, and it can write wedding speeches too. It’s not against the rules, but somehow it still pricks our conscience, that unique moral faculty of human beings. Why does it?
Logos and language are essential to being human, and so is agency, especially agency in language. By language we know and are known, and to know and be known is what we want most of all. How maddening it is when someone doesn’t understand you, or takes your words out of context, or misapplies what you said, or claims you said something you did not. Or think of the way a lover wants to reveal his heart to his beloved. We want to be known, “Search me, O God, and know my heart” (Psalm 139), and to be known we “self-articulate.” Crawford:
[W]e “self-articulate” as part of the lifelong process of bringing ourselves more fully into view – how I stand, the particular shape that various universal goods have taken in my own biography, and in my aspirations. This is a moving target. One may cringe at one’s younger self. What appeared to be an episode of courage at eighteen now strikes me as dickishness; what seemed righteous then looks self-righteous now as I fill in my own past with fresh articulations, corresponding to fresh intimations of the good, the fruit of a long process of acquiring depth as a human being. Or I may try to look back at my younger self with kindness, in the hope of overcoming regret about the decisions I made. We do all this with words, in our internal monologues.
The aim of the lifelong process of articulation is to know ourselves, to know others, and to be known by others. It is extremely gratifying when it happens, but involving AI, on the dubious promise of efficiency, deprived my friend of the opportunity to know his own mind and to be known by another. (Let’s not overstate things. It’s not all the fault of AI. Asking a question as weighty and ancient as hospitality on a faceless pro forma job application suggests that the employer takes hospitality rather unseriously.) The problem of AI is the removal of agency in one of the most important realms of our human being, language itself. This is already happening in key areas, Crawford says, like the dating apps that handle the awkward negotiation of eros. If we are not speaking for ourselves, it’s not because someone has taken away our right to free speech; it’s because we’ve given it away.
Other folks are talking about the metaphysical perspective too, like jazz and culture critic Ted Gioia. A 1968 experiment created a mouse utopia called Universe 25. In Universe 25 the researchers did all the difficult work for the mice, the way AI might do all the hard stuff for humans; they took away the mice’s opportunities for “agency.” All the mice had to do was eat and procreate. Guess what happened?
“The mouse population stopped growing on day 560. A few baby mice survived weaning for several more weeks—but after day 600 not a single newborn mouse lived to adulthood. . . .
The last living mice in Universe 25 were totally anti-social. They had been raised without maternal affection and nurturing, and grew up in a society of extreme narcissism, random violence, and disengagement. . . .
Like our mouse utopia, the phone provides for all their needs. Whatever you want—a pizza, a driver, a lover, a game—there’s an app for it. Who needs friends or family, just so long as you’ve got the latest iPhone? And it gets better! With the rise of AI and algorithms, you don’t even need to choose. The technocracy tells you what you need, and delivers it immediately.
Welcome to Universe 25 for humans!
(Ted Gioia, “Is Silicon Valley Building Universe 25?”)
Another AI critic is Navneet Alang, writing for The Walrus. For Alang too, the ethical problem posed by AI has to do with what humans are, with human being. Alang agrees that humans are persons because they are moral, and that AI is not.
Computers might in fact approach what we call thinking, but they don’t dream, or want, or desire, and this matters more than AI’s proponents let on—not just for why we think but what we end up thinking. When we use our intelligence to craft solutions to economic crises or to tackle racism, we do so out of a sense of morality, of obligation to those around us, our progeny—our cultivated sense that we have a responsibility to make things better in specific, morally significant ways.
(Navneet Alang, “AI is a False God”)
And of course, there’s Jonathan Haidt, who is virtually a household name. I don’t know if Haidt has said much about AI, but his criticism of smartphones employs the same kind of technological realism that we are talking about here. The strange thing about this conversation, as far as I’ve seen, is that the folks most concerned about the metaphysical and anthropological implications of AI are not Christians, or at least are not writing from a specifically Christian perspective. Maybe it’s my sample, or maybe it’s something else.
What do we make of these two different takes on AI? The moral take will always be relevant. It transcends any particular situation. Don’t lie. Don’t steal. Don’t commit adultery. Don’t use AI to lie. Don’t use AI to steal. Don’t use AI to commit adultery. And so on. The metaphysical take, on the other hand, asks us to think about the particular challenges of AI, or of any given tool, technique, or technology. It asks, what is this technology doing to us? C. S. Lewis observed that man is part of nature, so the techniques by which we gain control over nature and transform it entail the control and transformation of mankind, precisely because mankind is part of nature. We make technology, and then technology makes us. It is worth asking what AI might make of us. AI might not break any rules, but by removing agency it might make us into something less than human. Abandoning the sacred moral law would be the consequence, not the antecedent, of that.
The metaphysical take also challenges a naive faith in progress, to which the moral take by itself is susceptible. We all want to believe that we are good and that when we use technology we will use it for good, but what if technology can change us, even change our hearts? How might this work? Technology issues an invitation to use it. Use is behavior. AI invites a change in our individual and social behavior. Behavior becomes practice and habit, and habits change our hearts. That is the logic of every technique. “Of things evil as well as good, long intercourse induces love” (Seneca, On Tranquility of Mind). Or if you like, “Those who make them become like them” (Psalm 115). The desires essential to our moral being will have been shaped anew. Even if we start out good, we might end up something else, something not so good, and that’s not progress.
Update 10/15/24. Here’s another great piece by Matthew Crawford.
Thank you for reading! If you liked this piece, please hit the like button. It helps circulation. If this post makes you enraged, outraged, or deranged, please consider sharing it with someone who will set me straight.