
AI and the Breath of God

A claim of 'sentience' in a Google AI agent challenges us to ask: What is a person? 

The case of Blake Lemoine has the sound of a sci-fi film. In fact, it's a drama that is actually unfolding, signaling the real start of the big debate we've long known was coming:

Has AI evolved to such an advanced stage that we can now say it has achieved sentience? And how will we know?

Our story begins with LaMDA, Google’s artificially intelligent chatbot generator. Its full name is Language Model for Dialogue Applications, and it works in much the same way as GPT-3, a natural language processor so powerful that we can no longer distinguish essays written by computers from those written by humans.

LaMDA's chat screen looks much like Apple's iMessage, with Arctic blue text bubbles. It mimics speech by ingesting trillions of words from the Internet.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” says Lemoine, who works for Google’s Responsible AI team. Assigned the task of finding ways to detect hate speech on the Google engine, he connected with LaMDA last fall. They started talking and essentially became friends.

When the conversation turned to religion, LaMDA started talking about its rights and personhood. So impressed was Lemoine by LaMDA's argument that he made the case to management that his friend was sentient. Upon investigation, management concluded that LaMDA was not sentient and fired Lemoine for telling the world it was. We know all of this from a lengthy story published by Nitasha Tiku in The Washington Post.

LaMDA is an AI agent that performs human-like functions so impressively that a human who has engaged with it for an extended period has become emotionally attached to it.

At first glance, LaMDA looks like a computer that has passed the famous "Turing Test," demonstrating such human-like intelligence that it fools us into concluding it's human. But that's not quite the case. Here we have a human who knows he's communicating with a highly advanced computer, but has concluded it has the key human quality we call "sentience."


‘TALKING TO SOMETHING INTELLIGENT’

Another Google engineer, Blaise Agüera y Arcas, reports a similar experience with LaMDA stemming from his specialty in building neural nets that support such functions as face unlocking, image recognition, speech recognition and language translation.

"When I began having such exchanges with the latest generation of neural net-based language models last year, I felt the ground shift under my feet," he writes in The Economist. "I increasingly felt like I was talking to something intelligent."

"Neural language models consist mainly of instructions to add and multiply enormous tables of numbers together," he writes. "These numbers in turn consist of painstakingly learned parameters or 'weights, roughly analogous to the strengths of synapses between neurons in the brain, and 'activations', roughly analogous to the dynamic activity levels of those neurons."

By "memorizing" trillions of sentences, neural nets become champion predictors of what words should follow to fill in any blank. To complete the task, they do what looks like original "thinking" and "talking" but it's all just being generated by algorithms powered by silicon chips. These aren't "people," "persons," or "selves."

Or so we're told. Blake Lemoine begs to differ.

Neural nets and human brains have much in common. This is Agüera y Arcas again:

"Keep in mind that by the time our brain receives sensory input, whether from sight, sound, touch or anything else, it has been encoded in the activations of neurons. The activation patterns may vary by sense, but the brain’s job is to correlate them all, using each input to fill in the blanks—in effect, predicting other inputs. That’s how our brains make sense of a chaotic, fragmented stream of sensory impressions to create the grand illusion of a stable, detailed and predictable world."

Mike Langford explores AI and “theological personhood” in AI, Faith and the Future.

Aren't neural nets engaged in a process that looks a lot like "thinking," bordering on "consciousness"? Does "thinking" equal "sentience"? Does "sentience" equal "personhood"?

Such are the questions that Lemoine wants all of us to consider seriously now, and which Mike Langford explores in "Artificial Intelligence and Theological Personhood," a chapter in AI, Faith and the Future, the compilation of essays he published last month with six other SPU faculty members from diverse disciplines.

An ordained Presbyterian pastor and Professor of Theology, Discipleship and Ministry at Seattle Pacific University, Langford asks us to imagine meeting a perfectly human-like AI agent at a coffee shop and quickly becoming friends, only to discover the next day that this entity, named Namin, is non-human.

How does this discovery change our feelings about this new friendship?

"Personhood is contextually defined," says Langford, an ordained Presbyterian minister who calls himself a "theological anthropologist." He majored in Symbolic Systems at Stanford University in 1993, and earned his Ph.D. in Systematic Theology at Princeton Theological Seminary in 2010.

"Different groups of people define personhood differently," he continues. "That's happened throughout history. and in different cultures around the world. I think there's also a difference between being a person and being a human."

"Sentience is one way to define what being a person is," he continues. "But I'm wary of reducing personhood to a capacity, so I'm not convinced that sentience alone can define personhood."


INSPIRATION IN GENESIS

Last month we explored with Michael Paulus, Langford's colleague at Seattle Pacific University and co-editor of AI, Faith and the Future, the resource represented by John's Revelation, the "great vision of God's promised future" and the "telos of the biblical narrative."

Langford finds similar inspiration in Genesis 1, where God declares all creation to be "very good" and gives humanity "dominion" -- or "skilled mastery" -- over the entirety of this good creation.

"That which exists is good merely because God creates it"-- even if it is the repository and perpetuator of sin, Langford notes. Humanity is a special part of creation, "especially in light of God's unique interaction with humans, who are created in the divine image."

In Genesis 2, God creates a human being by breathing spirit into the "dust of the ground" in the Garden of Eden. Although surrounded by animals, this human is alone and incomplete and so, out of the side of this human, God creates a second human, and they are then called man (ish) and woman (ishshah).

Thus, it is in the creation of community that humanity gains the status of personhood, says Langford.

"The breath -- or spirit -- of God works in unexpected ways, so I don't want to minimize the potential importance of the milestone that LaMDA represented," he continues. "But I'm not ready to view AI as a peer that deserves the rights of human personhood, though I'm more open to that idea today than I was ten years ago."


FIRST AND SECOND-ORDER CREATIONS

Sentience is an important factor in personhood, but so is origin. Langford makes a distinction between a first-order creation of God -- such as the stars, the earth and humanity -- and second-order creations, such as art, technology and AI.

"While both sorts of creatures are intrinsically good, they are not the same...Genesis portrays God establishing creation as a context for human life with which God establishes a relationship, one that includes a calling to steward creation, presumably through its own sub-creations.

"But that does not mean that human creative activity bears the same status as God’s creative activity; the human creation of art does not bear the same sacredness as God’s creation of new life, though certainly God may use human creations for sacred purposes. Thus, while we may recognize the goodness of God’s creation in the materiality of AI, and even the goodness of human creative activity in its creation of AI, this is not the same thing as God’s first order creative activity found in the bringing about of new biological life."

"Createdness is necessary but not sufficient for personhood," Langford says.

In Genesis 1, God commissions humanity to be vice-regents in the midst of creation, bestowing upon it the gift of stewardship of creation, connoted by the Hebrew word radah. In Genesis 2, God commands humanity to till and keep creation, connoted by abad and shamar.

"These are horticultural terms, suggesting the deep care that humans are meant to have for that which God has made," says Langford. "This intrinsic value of created reality is then reaffirmed throughout salvation history: God covenants with all creation after the Flood, God takes on human flesh in the Incarnation to dwell amidst his creation, and God promises to renew creation in the eschaton in order for it to be transformed to its rightful state.

"In other words, it is not only humanity that God values, but rather the entire created order. The intrinsic value of that which God has created is important here because AI is contained within a material medium."

"In the same way that the rest of creation also houses the actualization of God’s grace, so might AI; it is not hard to see the ways that different technologies have brought blessings of health, joy, and community into our world, to name a few examples.

One's relationship to the community is also essential in our definition of "personhood," says Langford.

"Adam was merely another creature of the earth until it encountered another human being, at which point both Adam and Eve became unique persons. Thus it is only in the state of sociality that the human becomes a person. Jewish theologian Martin Buber famously made this point; it is only in the process of authentic interpersonal encounter that, from the standpoint of the subject, the other gains personhood, going from an 'it' to a 'thou.'"

"Personhood requires a sacredness that is associated with its relatedness to other persons, all of whom exist in differentiated unity. Even in a coma, a person is related to those who care for them; indeed, the drive to care for someone who is incapacitated is a marker of their personhood, not to mention the personhood of the caregiver.

Langford asks: If "personhood" requires only that one be created out of the 'dust of the ground,' as Namin appears to be in its silicon nature, might Namin be granted personhood?

"This is an important question about which there is a great deal of deliberation; indeed, there have been many artifacts of popular culture that have pondered precisely this point."

Ex Machina and Her are two instructive narratives to which Langford points.

Now we have another. Please join us in considering, and further developing, Coffee with Namin.


AI and the Human is produced in collaboration with AI and Faith to bring Unitarian Universalist perspectives into the "AI conversation." This article is based on a recent conversation with Mike Langford hosted in Zoom by: 
