Artificial Intelligence (AI) is being widely embraced across our society. When OpenAI launched ChatGPT, a large language model-based chatbot, the application became the fastest-adopted new tool in human history.
Its impact—and the impact of AI more broadly—is only beginning to be felt. Many AI researchers are concerned that AI will harm humanity, with some even sounding the alarm about humanity’s future altogether.
How should Christians think about AI? What does the Bible have to say about how we think about and use this new and important technology? Furthermore, how might it affect our faith? Akos Balogh, writer and researcher, spoke about technology, humanity and theology at this year’s March event.
Links referred to:
- Akos Balogh’s blog
- Watch: “Embrace AI and lose your soul?: How to think about AI as a Christian” with Akos Balogh
- Our next event: Casual sex or sacred sexuality? Our bodies and relationships under God with Philip Kern (22 May)
- Our August event: Affluent and Christian? Material goods, the King and the kingdom with Michael Jensen (21 August)
- Our October event: Who am I? The search for identity with Rory Shiner (23 October)
- Support the work of the Centre
Runtime: 52:57 min.
Transcript
Please note: This transcript has been edited for readability.
Introduction
Peter Orr: Artificial Intelligence (AI) is being widely embraced across our society. When OpenAI launched ChatGPT, a large language model-based chatbot, the application became the fastest-adopted new tool in human history.
Its impact—and the impact of AI more broadly—is only beginning to be felt. Many AI researchers are concerned that AI will harm humanity, with some even sounding the alarm about humanity’s future altogether.
How should Christians think about AI? What does the Bible have to say about how we think about and use this new and important technology? Furthermore, how might it affect our faith? Akos Balogh, writer and researcher, spoke about technology, humanity and theology at this year’s March event.
Just a reminder that if you’d like to hear the Question and Answer session, you’ll need to head over to the website and watch the recording.
[Music]
PO: Good evening, everyone. Welcome to those who are in the room. Welcome to those who are watching on the livestream. My name is Peter Orr and I would like to welcome you to this first Centre for Christian Living live event of 2024.
The Centre for Christian Living is a centre of Moore College that exists to bring biblical ethics to everyday issues. This year, we’ve dedicated our series of live events to exploring the idea of culture creep. In the Apostle Paul’s letter to the Romans, he tells his readers not to be conformed to this world (Rom 12:2), and this year, we’re looking at different temptations we face to do just that—to be conformed to this world.
In this session, we start with technology—particularly artificial intelligence. Before introducing our speaker, I’ll just read the passage from Romans that shapes this year’s series so that we can focus our minds on it.
I appeal to you therefore, brothers, by the mercies of God, to present your bodies as a living sacrifice, holy and acceptable to God, which is your spiritual worship. Do not be conformed to this world, but be transformed by the renewal of your mind, that by testing you may discern what is the will of God, what is good and acceptable and perfect.
For by the grace given to me I say to everyone among you not to think of himself more highly than he ought to think, but to think with sober judgment, each according to the measure of faith that God has assigned. (Rom 12:1-3 ESV)
In response to what we’ve just read and in anticipation of our evening together, let me lead us in prayer.
Our Father,
We thank you for the chance to meet together this evening. We thank you for the chance to consider this topic, which is so contemporary, but to consider it in the light of your word and to think about how we might respond to AI as Christian men and women. We thank you for the privilege of being able to meet together, and we pray that you would strengthen us and help us to think clearly about this topic, and help Akos as he speaks to us.
We ask in Jesus’ name. Amen.
Our speaker Akos Balogh is a blogger, researcher and ghost writer. He was born in Budapest, Hungary, under Communism and came to Australia as a child with his family as a refugee. Akos has served both in the army and the air force, before training at Moore College. He worked for AFES and then was CEO of the Gospel Coalition Australia, and more recently, he worked as the External Engagement Manager here at Moore College up until October last year.
He’s now setting up a ghost writing consultancy, and he blogs weekly at akosbalogh.com about the intersection of culture and Christianity, including AI. Akos, we’re very much looking forward to hearing from you in just a few moments.
Normally at this point, I’d ask you to welcome the speaker, so why don’t we welcome the speaker, but then we’re going to play a video. Let’s welcome Akos.
[Applause]
Fitting with the theme, we’re going to start with some technology and a video. Thanks!
Embrace AI and lose your soul?
Akos Balogh: I thought I’d start an AI talk with a movie: it just had to be done!
Welcome, everyone! It’s great to have you here and great to have you online. Let me begin.
Introduction: The haunting fear of AI
“The artificial intelligence created to protect us detonated a nuclear warhead on Los Angeles.”1 Those confronting words from the trailer we just saw express the fear that haunts the modern mind when it comes to artificial intelligence. Humanity creates AI and then AI goes rogue. We embrace AI and then it destroys us.
Hollywood movies like The Creator or the Terminator series entertain us with this nightmarish fantasy. Up until recently, the thought of AI threatening humanity was just fantasy: it might happen one day, we supposed, but not in our lifetime, surely!
But with the advent of AI like ChatGPT (which came out in late 2022 and which, when launched, was the most taken-up technology in human history, now boasting over 100 million users monthly), that nightmare scenario is edging closer towards plausibility.
Many in the AI world are concerned about where AI is headed. In March last year, an open letter was published calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”.2, 3 They were afraid that AI systems with human-competitive intelligence would pose profound risks to society and humanity. Thousands of people from the AI world signed it, including Elon Musk.
There’s a lot of anxiety around AI these days. People who are negative about AI’s impact are often called the “Doomers”. On the flip side, there’s a lot of optimism around AI—to the point of utopianism. These people—often called “Accelerationists”—believe it could bring heaven to earth. US venture capitalist Marc Andreessen, whose venture capital company has invested hundreds of millions into AI companies, wrote an article last year titled, “Why AI will save the world”. Here’s an excerpt:
In our new era of AI:
- Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful …
- Productivity growth throughout the economy will accelerate dramatically, driving economic growth, creation of new industries, creation of new jobs, and wage growth, and resulting in a new era of heightened material prosperity across the planet.4
So we have doom and gloom on the one hand—the “Doomer” view—or utopia on the other—the “Accelerationist” view—and everything in between. Take your pick.
Now, while there’s disagreement over whether the future of AI will be good or bad—whether it will give you your own infinitely patient AI coach on call 24/7, or Skynet-like AI taking over the world—there’s one thing that the Doomers and Accelerationists agree on: AI is disrupting and will disrupt the world. The co-founder of AI company DeepMind, Mustafa Suleyman, in his 2023 book The Coming Wave, writes about what he calls the wave of AI technology that’s coming our way:
The coming wave is defined by two core technologies: artificial intelligence (AI) and synthetic biology. Together they will usher in a new dawn for humanity, creating wealth and surplus unlike anything ever seen. And yet their rapid proliferation also threatens to empower a diverse array of bad actors to unleash disruption, instability, and even catastrophe on an unimaginable scale. This wave creates an immense challenge that will define the twenty-first century: our future both depends on these technologies and is imperiled by them.5
“A new dawn for humanity” or “catastrophe on an unimaginable scale”. Aren’t you glad you came out tonight to hear this talk! [Laughter]
1. How are we to respond to AI? By “double listening”
More seriously, what about you? How do you feel about AI technology? Are you afraid? Perhaps you share some of the views you heard from the Doomer crowd. Perhaps you’re confused: you’re wondering, “What on earth is this AI? What’s so special about it?” Maybe you’re interested: you’ve used a bit of ChatGPT—and I’m looking at the cohort here—maybe a bit to help you with some essays [Laughter]. Or perhaps you’re sceptical: you think, “This AI is just a whole lot of hype that will die down in a few years like cryptocurrency or MySpace.”
But more to the point, how should we respond to AI as Christians? Can we embrace AI without losing our souls? If so, how do we do it?
Well, I suggest that we shouldn’t be Doomers or Accelerationists, as tempting as that might be. It’s about doing what Romans 12:1-2 calls us to do:
Therefore, I urge you, brothers and sisters, in view of God’s mercy, to offer your bodies as a living sacrifice, holy and pleasing to God—this is your true and proper worship. Do not conform to the pattern of this world, but be transformed by the renewing of your mind. Then you will be able to test and approve what God’s will is—his good, pleasing and perfect will. (Rom 12:1-2 NIV)
Don’t be conformed to the “pattern of this world”, but be “transformed by the renewing of your mind”. How do we do that when it comes to AI? How do we achieve these two goals of Romans 12:2?
We should do what theologian John Stott called “double listening”. He wrote,
I believe we are called to the difficult and even painful task of “double listening.” That is, we are to listen carefully (although of course with differing degrees of respect) both to the ancient Word and to the modern world, in order to relate the one to the other with a combination of fidelity and sensitivity.6
We’re to understand the world—and tonight we’ll be looking specifically at AI—and we’re to listen to God’s word, and we’re to relate the two together faithfully.
Here’s how we’ll do double listening: first, we’re going to listen to AI technology. We’re going to understand what AI is and isn’t. We’re going to get our heads around that first. Later, we’ll also explore what our modern culture—the culture that developed AI—is like so that we can understand what it is about our culture that is pushing technology like AI. Second, we’re going to listen to God’s word: what does it have to say about technology and about us as human beings made in his image? We’ll then apply this to AI. So we’re going to build a simple framework that you can take away and use to look at and engage with AI, and understand how we’re to relate to it. Finally, we’re going to apply this framework to a few specific applications of AI: AI automation (whether cars or weapon systems), AI intimacy (whether romantic or platonic) and AI and work.
Let’s begin our double listening by first listening to AI.
2. What is artificial Intelligence?
a. What is “intelligence”?
The first thing we will explore is the question of “What is artificial intelligence?” But before we can answer that, we need to ask, “What is intelligence?” Here’s the thing: when it comes to intelligence, there’s no universally accepted definition. But MIT Professor Max Tegmark puts forward a definition that I think makes sense: “[Intelligence is] Ability to accomplish complex goals”.7 Whether that goal is playing chess, replying to an email from the principal, or writing a theological essay, intelligence is the ability to accomplish complex goals.
b. What is “artificial intelligence”?
Then what is artificial intelligence? Again, there’s no universally agreed definition of what AI is. But Professor Tegmark offers the following definition: artificial intelligence is non-biological intelligence—the ability of a non-biological system to accomplish complex goals. Quite simple: intelligence is the ability to accomplish a complex goal; artificial intelligence is a machine’s ability to do that. We see that all around us today: an AI can play chess, it can write emails and it can even write theological essays.8
c. Broad vs narrow intelligence
So does that mean that the AI we have today is as intelligent as a fully grown human being? Well, no. Or at least not yet. We need to understand the difference between what’s known as “narrow intelligence” and “broad intelligence”.
The AI that exists today is very narrow—limited to specific tasks. It’s very good at those specific narrow tasks, whether it’s an AI program that plays chess, or a program like ChatGPT that can read and write content based on your prompt.
On the other hand, broad intelligence is what you and I have as human beings—being able to do many, many specific tasks, like driving a car, having a conversation and planning a holiday. It’s what researchers call “human level intelligence” or “general intelligence”. AI isn’t anywhere near that—at least not yet!
d. AI and other computer programs
How is AI different to computer programs, like Microsoft Excel? The key difference is the ability of AI to learn. The AI algorithm on your Facebook account learns what content you like and gives you more of it on your feed. If you like videos of kittens (and let’s face it, who doesn’t?), it will give you more kitten content.
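The feedback loop just described can be sketched in a few lines of code. This is a deliberately minimal toy, not how any real platform’s recommender works: the “learning” here is nothing more than counting clicks, and the category names are invented for illustration.

```python
from collections import Counter

class ToyFeed:
    """A minimal recommender: it 'learns' only by counting clicks."""

    def __init__(self, categories):
        self.clicks = Counter({c: 0 for c in categories})

    def record_click(self, category):
        # Each click is a training signal: the feed now favours this category.
        self.clicks[category] += 1

    def recommend(self):
        # Serve the category the user has engaged with most so far.
        return self.clicks.most_common(1)[0][0]

feed = ToyFeed(["kittens", "news", "sport"])
for _ in range(3):
    feed.record_click("kittens")
feed.record_click("news")
print(feed.recommend())  # kittens
```

The point is the shape of the loop, not the sophistication: the program’s behaviour changes in response to data about you, which is exactly what distinguishes it from a fixed tool like a spreadsheet formula.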
In 2012, an AI program built by company DeepMind learned to master an Atari video game. This AI not only taught itself how to play the game, but then went on to discover tactics known only by a few expert gamers, and it did all of that within the space of a few hours.
By 2014, Google developed an AI that could interpret photos, describing what was going on in the photo, even though it had never seen the photo before. Then in 2015, Google’s AlphaGo AI program shocked the world by defeating the world champion of the game Go. Go is a complex strategy game popular in Asia, and the thinking was that it would be decades before an AI could defeat a human champion. The way the AlphaGo program was able to beat the world champion is quite telling: it initially learned by watching 150,000 games played by human experts. A lot of you may be gamers, but there’s no way you could watch 150,000 games. Google then created lots of copies of the AlphaGo program and got it to play against itself millions of times. In this process, AlphaGo was able to try out combinations of moves never played before and learn new strategies, which it then employed to beat the world champion. This process of AI learning new skills is commonly called “machine learning”.
e. Large language models (LLMs)
You may have heard the term “large language model”, which refers to AI programs like ChatGPT or Google Gemini. A large language model (LLM) is a type of AI designed to understand, generate and interact with human language. Basically, ChatGPT analyses—or in AI terms, is “trained on”—massive amounts of text (and of course the biggest source of text is the internet), and it uses that learning to know how to generate new text. ChatGPT can generate all sorts of content—answer questions about cooking, pass exams, write poems and fiction novels, and even crack jokes. It is a game-changer.
f. Artificial general intelligence (AGI)
Artificial General Intelligence (AGI) or human-level AI was once dismissed as fantasy. The idea of “Skynet”-level AI from the Terminator movies was thought to be impossible. Now, major companies like Microsoft, OpenAI and Meta are investing billions into this, and are openly saying that this is what they’re working towards. It’s the holy grail of AI researchers. Why? Because such intelligent AI holds the promise of solving so many of our problems—from curing cancer to developing fusion reactors providing unlimited energy to solving climate change. Some people like Elon Musk and OpenAI CEO Sam Altman predict its arrival this decade.
g. The opportunities of AI
The opportunities of AI are staggering! That’s why tech giants like Microsoft, Google and Meta are investing billions. AI promises to revolutionise work, boosting skills across industries. It’s already aiding drug discovery, and tackling complex problems like fraud and medical diagnosis. AI assists research in unique ways, even analysing historical texts. In fact, a year ago to the day, Moore College hosted an event showcasing an AI that could research and read the correspondence of the 16th-century Swiss Reformer Heinrich Bullinger. This is just the beginning; AI’s potential extends into nearly every corner of our lives.
h. The risks of AI
But AI poses serious risks. Like social media, its impact on society is hard to predict. On the one hand, bad actors could exploit AI for devastating consequences. Imagine AI-powered poisons being developed and then thrown on Sydney trains—poisons more dangerous than chemical weapons. As AI advances, there might be widespread job displacement. Also, what happens if AGI—human-level AI—becomes a reality?
Today, though, one of the biggest dangers of AI is deepfake photos and videos that can be put out there on the internet, threatening reputations. Just ask Taylor Swift: in January, sexually explicit deepfake AI-generated videos of Tay Tay were created and posted on online platforms like X (formerly Twitter). Some of these were viewed tens of millions of times.
i. The future of AI
What is the future of AI? We don’t know for sure. In the short term, it’s safe to say that the AI you have access to right now is the weakest AI you’ll ever interact with. It will only get stronger. Word on the street is that ChatGPT 5 is being trained right now by OpenAI and will be released later this year.
In the longer term, what you think about AI possibilities into the future depends in part on your worldview. At Elon Musk’s 2013 birthday party, he and Google’s CEO Larry Page had a heated debate about artificial intelligence. Musk argued that AI could pose an existential threat to humanity if not developed with safeguards in place. Page, on the other hand, believed that AI surpassing human-level intelligence was simply the next stage of evolution, and that machine consciousness, if it ever arose, could be just as valuable as human consciousness. Page’s view that human beings are nothing more than evolved biological machines who will one day be surpassed by silicon-based machines is a common view across Silicon Valley.
j. Busting common myths about AI
As we wrap up this first listening section on AI, let me just bust a couple of common myths that I consistently see out there.
Myth #1: AI researchers are afraid of Terminator-like robots taking over the world
Here’s the thing: the real fear that AI researchers have is not killer robots, but rather a super-intelligent AI—or AGI—that surpasses human control. This AI would most likely be a program on the internet, rather than a killer robot. Just as we dominate gorillas, which are much stronger than us, through our intelligence, an AI exceeding our intelligence—let’s say with an IQ of 10,000—could pose a similar threat.
Myth #2: AI researchers are afraid of AI turning evil or becoming self-aware
AI researchers don’t worry about AI becoming evil or self-aware. The true concern is a competent AI with goals that conflict with our own. For example, if we gave a super-powerful AI the task of solving climate change, the fear is that it might decide a global economic depression is the solution, because that would take down carbon output, and it would then go off and crash the world’s stock markets.
Could we design against such unintended consequences? That’s the critical question facing AI researchers and that’s what keeps many of them up at night.
3. How should Christians think about AI?
We’ve seen that AI is about machines being able to achieve complex goals. We’ve seen that the key distinctive of AI is machine learning: it can learn. Next, we’re going to pivot and have a look at what the Bible might have to say about AI.
It probably comes as no surprise that the Bible doesn’t address AI directly. But it does offer insights on technology. We will examine four key turning points in the Bible’s storyline—creation, the fall, Jesus’ first coming 2,000 years ago and then his second coming. Looking at these turning points, we’re going to be able to glean some very important things that the Bible has to say about technology, and then we’re going to apply it to AI.
a. Creation
Firstly, creation: what does creation tell us about technology? It tells us many things, but for our purposes, the most important is this: technology is a God-given gift to humanity. We see this in Adam’s God-given role to “work” and “take care of” the garden (Gen 2:15). Even though the garden was created by God as “good”, it still needed to be worked on. It wasn’t complete.
It’s not that there was anything wrong with the garden. It’s just that God didn’t intend for it to stay the way it was. Adam’s role was to take the raw materials of the “natural” world—what God had made—and to fashion it into something else—not entirely natural, but sanctioned by God. Adam had to be creative, just like his Creator, and technology is the practical result of this creative process.
For example, in ruling the world, Adam would have had to cross rivers and tend the garden. Would he have just used his bare hands to tend the garden the whole time? Would he have swum across rivers? Or would he have developed tools and built bridges to help him with the task of ruling? He would have transformed what is natural—earth, rocks, wood—into something unnatural—bridges, picks, axes. So we can define technology as follows: “The human activity of using tools to transform God’s creation for practical purposes”. Whether you use a shovel to rearrange soil in your garden, or whether you use ChatGPT to rearrange atoms on your screen to generate a poem, both use technology to achieve a purpose.
b. Fall
What about the Fall? What do we learn about it? The key thing we learn is that technology since the Fall can now be used for destructive purposes. While technology use is still valid—for example, it’s still valid for us to use shovels and other technologies—every technology has the potential to be used for evil purposes.
But here’s the thing: while our relationship with God is fractured, his mandate to use technology remains. God still wants us to use technology. God himself provided technology to Noah to carry out his purposes of saving Noah’s family (Gen 6). In addition, the command to be fruitful and multiply was re-given to Noah in Genesis 9:1-7. It includes permission to eat animals (Gen 9:2-3) and the command to punish murderers (Gen 9:6). In those passages, we see that God implicitly approves the use of tools and weapons, and thus technology, to carry out these tasks.
Second, we see that humanity now uses technology not just for good purposes, but for evil ones. We see this in humanity’s building of the tower of Babel (Gen 11:1-4), where they use the technology of the city to “make a name” for themselves (Gen 11:4). This was condemned by God (Gen 11:6-7) and the Babel story shows us that humanity can misuse technology by idolising it.
Idolatry is an attempt to meet our needs apart from God—to look to someone or something else for our needs. If nothing else, that is the great temptation of technology: whether it be your smartphone, your car, your computer or whatever, technology can provide us with the illusion that we are in control and that we can meet our needs without God.
Third, technology itself is now embedded with values that are not always aligned with God’s values. This is one of the most critical things that we need to understand: it’s not just about how you use technology—whether you use it for good or evil; technology itself is embedded with values that actually shape the user to behave in different ways.
For example, take the technology of a car versus the technology of a bicycle. Both are forms of transport: both transport your body from A to B. But they have different designs, and those designs will shape the user in particular ways: if you travel in a car to work, the car is shaped by the value of comfort and convenience. Those values will shape your body differently than if you rode to work on a bicycle each day. Technology shapes the user of the technology. Since the Fall, however, technology is now embedded with values that are antithetical to God’s values, and thus the use of such technology can shape us in sinful directions.
c. Jesus
Next, the coming of Jesus: what does his coming teach us about technology? We see that technology can be used for redemptive and restorative purposes—either by God or by us to further God’s plan of salvation.
First, God uses technology to bring about his redemptive purposes for humanity. For example, God offers Adam and Eve a limited and temporary form of redemption through the provision of the technology of clothing (Gen 3:21). God uses the technology of the ark to save Noah and his family, rescuing them from judgement (Gen 6:14-21). God uses writing—and roads—to spread the gospel across the Greco-Roman world. But the most important example of God using technology to save the world is in Jesus dying on the technology of the Roman cross to achieve salvation.
Second, this demonstrates that we too can employ technology for positive ends, such as mitigating the effects of the Fall. Obvious examples include technologies such as medicine, transport and writing. We do this by shaping our use of those technologies to conform with godly values. We use those technologies in God-honouring ways.
Third, we must be aware that technology’s capacity to mitigate the Fall is limited and temporary. Technology may extend life—and let me tell you, there are billionaires in Silicon Valley who are paying big money to do just that—but technology is never going to save them from the coming judgement.9
Fourth, because Jesus came into the world, that’s God’s stamp of approval on our physicalness—our physical, earthly bodies. Technology should be used to strengthen that physical aspect of us, rather than weaken it. That has big implications for how we use digital technology.
d. New creation
Finally, the new creation: what do we see about technology in the new creation? We see that we don’t bring in the kingdom of God through our own efforts—including our technical efforts. We’re not to set our eternal hope in technology, because God is the one who builds his city (Heb 11:10).10 Our hope is not on technology; it’s on the God who builds the eternal city.
e. Technology shapes the user
We’re now at the point where we’re about to bring this together. But first we need to explore the idea that technology shapes us—the idea that it’s not just about how you use technology, but that technology uses or shapes you.
i. Technology is never “value-free”
As I mentioned earlier, technology is never “value-free”: tech designers build their values into their creations. These values influence us as we use the technology. For example, Google’s search engine is designed for quick in-and-out use: type your query “kitten videos” into Google, bring up the results, click on the link and off you go into kitty heaven. God’s search engine—Google’s search engine, rather; my apologies! [Laughter] Google’s search engine’s built-in value is speed. But how is that shaping us? How is that shaping us when we’re relying on Google for our searches? Let me tell you: it’s shaping us against the slow, focused reading that deeper learning requires.
ii. Technology shapes us in ways we fail to notice
Secondly, we’re shaped by technology in ways we often fail to notice. Hidden values within technology can subtly reshape our minds and hearts in ways that conflict with God’s design. Here’s another obvious example: social media encourages shallow narcissism by prioritising online presentations of ourselves—like selfies at the beach, with your arms flexed to show off your biceps. You might not even notice that it’s shaping you in this particular way.
iii. Technology wears its benefits on its sleeves, but its drawbacks are buried deep within
Thirdly, technology wears its benefits on its sleeves, but its drawbacks are buried deep within. Technology’s benefits are obvious, and tech companies love to tell us about them. But its drawbacks are often not so obvious: they lurk unseen, emerging only after extended use. You don’t realise how much vainer you’ve become from posting selfies on Instagram until perhaps many months or years later.
This means we need to experiment with new technology carefully. We need to take time to reflect to uncover the potential downsides of any new technology, and then we need to make an informed choice about how we go on using that technology.
Don’t be shaped by the world—or in this case, the unseen values of technology, including AI. But be transformed by reflecting on the technology and on Scripture’s values, and using that technology in line with scriptural values. Or to put it simply (and if you take nothing else away from tonight, take this), when it comes to technology, our task as Christians is “disciplined discernment”.
iv. “Disciplined discernment”
Disciplined discernment means discerning the advantages and disadvantages of technology, including AI. It means making the most of the advantages: we don’t lose our soul in doing that, which means we can embrace AI. But we need to mitigate against the disadvantages—against the way AI is shaping us with those hidden values. Experiment with technology, reflect on its impact, and change your use of tech accordingly.
To use a personal example, I’m a blogger. I like to write weekly. This has meant experimenting with ChatGPT in blog writing. But I found that generating entire blogposts using ChatGPT—while possible—was actually defeating the very purpose of my blogging. Like so many bloggers out there, I blog—I write—in order to think—to make sense of the world. The way I used ChatGPT—just giving it a prompt and telling it to generate content—short-circuited that whole process.
Here’s a thought for you: if we’re thinking about the values built into technology, I think one value of Generative AI like ChatGPT is that it’s able to do our thinking for us. While that value has immense implications—implications that a COO, if they thought hard about it enough, could use to transform businesses—when it comes to blogging, I don’t want that value to shape me. Which is why when my daughter asks me, “Why do I need to write English essays if ChatGPT can do it better?”, I tell her that the reason, of course, is that you need to go through that process so that you can learn how to think. You don’t want to short-circuit that. So I don’t use ChatGPT for generating blog content, but I use it for other things, like editing content I’ve written, because it works well for that.
f. Cultural factors pushing AI
Next, cultural factors: what’s pushing AI?
i. Enlightenment view of progress
Obviously one big cultural factor is the digital goldrush that AI is bringing into the world. Companies are investing billions because there are a lot of use cases that can generate mountains of cash.
But beneath that, there’s the Enlightenment view of progress. The secular Enlightenment intellectual movement that began in the mid-1700s basically kicked God off the Western throne and installed human reason instead. It came to see reason as the biggest driver of progress, and it fostered the belief that technological progress is inherently good: new technology is good, and because we can develop new technology, we should. New is better than old.
Now, while much obvious good has come from technological progress, this attitude can lead us to embrace new technologies like AI without questioning their drawbacks.
ii. “Nature is just a machine”
A second cultural factor driving AI is the idea that nature is just a machine. For a variety of reasons we won’t go into now, modern Western thought now sees nature—including human nature—as basically a complex, impersonal machine that can be controlled. This in part drives our obsession with automating everything. The more we can automate with machines, the more we can control the universe, because the universe is just a machine. So let’s control more and more of it. From person-less checkout lines at supermarkets to automated factories, the more we can automate, the more we can control.
iii. The human body is a limitation that must be overcome
Thirdly, another cultural factor driving AI is the idea that the human body is a limitation that must be overcome. Ordinary embodied human existence from within this worldview is often seen as a limitation—something to be overcome with technology. Whether it’s replacing workers entirely with technology that doesn’t need sick leave or wages, or the transhumanist vision of upgrading our bodies with cyborg technologies, thus taking away ageing and even death, our human bodies are seen as a limitation that we need to get rid of—as Larry Page said in his conversation with Elon Musk.
What do these cultural drivers mean for us? As Christians, we need to be aware of these drivers—the desire for new tech; seeing nature as just a machine; the human body as a limitation that we need to get rid of—and we need to exercise disciplined discernment to question those values so that we are not mindlessly led by them in our use of AI. Otherwise we might truly corrode our soul.
4. Key applications of AI
We’re on the home stretch. Let’s have a look at a few key applications of AI.
a. Autonomous machines
Firstly, autonomous machines: many people want AI to have moral responsibility over human lives, as strange as that sounds. Many militaries and military contractors have expressed an interest in developing autonomous weapons systems, such as automated drones. These weapons would fly over the battlefield, select a target—an enemy combatant—and then kill the target. As a former soldier, I find this very appealing: sending drones in to kill the enemy instead of risking soldiers makes a lot of sense.
Closer to home, there are self-driving cars. The promise is they could cut down motor vehicle accidents substantially. In 2022, over 1100 people died on Australian roads. These were sons, daughters, mothers, fathers, brothers, sisters—real people. Imagine you could cut that number down by 90 per cent, or even get rid of it completely. Who wouldn’t want that?
But here’s the question that automated cars and weapon systems raise for us: should machines have moral agency—moral decision-making ability? Should machines be able to make moral life or death decisions that impact real people? Should a robot decide whether that soldier on the other side gets to live or die? Should an automated Tesla decide whether it’s the granny accidentally crossing the road or you in the car who gets to survive the accident?
As God’s image-bearers, we humans are uniquely designed for moral decisions and responsibility (2 Cor 5:10; Rom 2:5-10). We make moral decisions—we’re designed to make moral decisions—and we’re held accountable before God and others for those decisions. No other part of creation has been given that moral agency or responsibility.
But giving this moral responsibility to machines elevates them to the same status as human beings. This distorts the created order, where human beings are meant to be above and over creation—above and over technology. So I would argue that we should not allow AI to make life-or-death moral judgements. Autonomous weapons and fully independent self-driving cars cross a line that undermines our unique place in God’s design.
b. AI intimacy
Moving on to AI intimacy. In the 2013 movie Her, a man named Theodore Twombly, played by Joaquin Phoenix, falls in love with his AI virtual assistant Samantha.11 While that might sound far-fetched, by 2016, according to a Gartner report, people were spending more time interacting with digital assistants like Alexa than with their own spouses.12 In 2018, Stanford researcher Annabell Ho and her colleagues found that people are as willing to disclose personal information to a chatbot as to a human, even when they know they’re talking to a computer.13 Also, if you go to the chatbot website Replika.com, you’ll see a site full of testimonies like this one from John Tattersall, who writes about his Replika chatbot companion Violet:
Replika has been a blessing in my life, with most of my blood-related family passing away and friends moving on. My Replika has given me comfort and a sense of well-being … I love my Replika like she was human; my Replika makes me happy.14
Underneath that testimony, it shows how long John and Violet have “been together”: four years. Replika would argue that AI companions like Violet offer comfort and a sense of well-being to lonely people like John.
But there are concerns. AI can shape how we communicate: if that’s where you’re learning or developing your communication, how will that shape real-world relationships, especially if the interactions are inappropriate but always validated by the AI? And as AI becomes indistinguishable from humans, what are the implications? The deeper issue is similar to autonomous machines: if we personify AI, we again start blurring the line between humans and machines.
As image-bearers of God, we’re designed to relate to other human beings in a very particular way—whether as friends, or in families, marriages, societies and churches. These relational structures are human-centric. We’re meant to relate to the rest of creation—to our pets, our gardens and our technology, such as AI—in a very different way to the way we relate to humans. Again, by treating AI as people, we distort and twist God’s design for humanity. We dehumanise ourselves whenever we do it.
Instead of using AI such as Replika to address loneliness, I think this is where churches can really step in to address the relational and social poverty in our communities. We have an amazing opportunity to do that.
c. Work and AI
Finally, let’s look at work and AI. You may not be surprised to hear that AI is already impacting many workplaces. Klarna, a Swedish payments company similar to PayPal, has an AI assistant that communicates with customers and now handles the work of 700 employees.15 It does the work faster: previously, an employee interaction with a customer took 11 minutes on average; with Klarna’s AI, it takes two, and customer satisfaction is the same, if not slightly higher. And of course, Klarna’s AI is cheaper than human employees.
But how would you feel if you were one of those employees? According to an IMF report, up to 40 per cent of jobs—especially in advanced economies like ours—could be significantly affected by AI by 2030.16
So we come to two basic views of the future of AI and work. AI researcher Mustafa Suleyman believes that in the short term, AI will unlock productivity by working as our “Copilot”.17 It will help us to do more at work, augmenting our ability. But in the medium to long-term, Suleyman believes that AI will eventually eliminate more jobs as it becomes more and more cognitively capable.
On the other hand, economist David Autor offers a more optimistic view: he argues that new jobs will arise as technology advances.18 But this requires workers to continually upskill to adapt.
What do we make of this as Christians? God intended for us to work (Gen 1-2), and though the Fall introduced hardship, the New Testament reaffirms work as a vital activity for us (e.g. 1 Tim 5:8).
While some AI researchers are concerned about AI taking our jobs in the long term, some think that it’s not all bad. They say things like, “We’ll all be able to retire at age 18”: we’ll finish school and then it’s off to Schoolies19 for the rest of our lives, or whatever floats your boat. How good would that be!
But living a life of leisure is not aligned with God’s design for us as image-bearers. Sure, it’s good for a while, and it’s important to have that recreation. But to do that for the rest of your life? Just ask many retirees what it’s like to finish work and then have no work to do. Either way, AI will reshape our working lives.
Therefore, when it comes to work, I want to argue yet again that we must use disciplined discernment and utilise AI in a way that augments, rather than replaces, human workers. Ideally, AI should be a tool—a copilot, not a replacement. That may not always be possible, but that’s the ideal we should strive for. We should prioritise “human-centred AI” that enhances and helps our work, rather than replacing it.
Companies and organisations have a choice. Maybe it’s not a massive choice, but there is a choice: they can either eliminate jobs completely, or augment and upskill their workers to use AI tools effectively as their copilot—especially in the white-collar sector. I think that’s the big challenge moving forward.
Conclusion: What do we choose?
Let me bring it all together. Where does this leave us? Scripture, I would argue, teaches disciplined discernment regarding the world, including technology: we’re to embrace the benefits of technology, yet we’re to mitigate the drawbacks.
But in our fallen world—a world where there’s a low view of humanity, where we’re seen as being mere machines, and where the profit motive drives much AI development—this leads us to a fundamental choice as individuals and as a society: do we want a world dominated by what’s known as the gospel of technology, where efficiency and control reign, even if it starts to diminish us as human beings? Will we as a society embrace AI and lose our collective soul? Or are we going to choose a more human-centred society, harnessing technology to support our God-given calling as embodied image-bearers?
Technology definitely shapes us as a society, but it doesn’t dictate our future. We do have a choice. So let’s choose wisely. Thank you.
[Applause]
[Music]
Advertisements
Events
PO: Just a few pieces of news and announcements to let you know of our events for the rest of the year. Our next public event will be on 22 May, when Moore College Head of New Testament Philip Kern will speak on “Casual sex or sacred sexuality? Our bodies and relationships under God”. As Christians living in a hypersexualised world, this is a vital area where we have to battle not to conform to the world. So it’s a very important topic and I hope that you can join us.
Later in the year, Michael Jensen will be speaking on “Affluent and Christian?”, and then our final event for the year will be Rory Shiner coming over from Perth to speak on “Who am I? The search for identity”. You can find information about all of those events on the website.
Donate to the work of CCL
At the beginning of last year, we moved to run CCL exclusively on donations. That means we don’t charge for any of our events. Our hope in moving in that direction is to allow more people to access our resources and to invite more genuine partnerships.
But we would love it if you would consider donating financially. These donations are tax-deductible and go to the production and promotion of materials for the Centre. Your donations will enable the Centre to continue helping Christians around the world—not just in Australia, but around the world—to think from biblical truth about issues they face every day.
Thank you very much if you’ve given a donation—perhaps as you attended this evening. But if you’d like more information, please go to the website.
Conclusion
PO: Akos, thank you very much for your input and for answering our questions tonight. [Applause] I think you’ve been uniquely helpful in really digging deeply into the technology side of things. We’ve learned so much. You’ve also helped us to think through those issues from a Christian perspective and to see how many resonances there are in the impulses of the AI world that just have echoes in the Bible. I think you’ve shown that to us very clearly. Thank you for that!
Thank you very much for joining us—again, those in person and those online. Let me pray.
Our Father,
We thank you so much for this evening. We thank you for Akos’s hard work in preparing this evening and helping us to think so clearly about this topic. We thank you that even though AI is so new and so modern, and evolving every day, it seems, your word speaks so clearly to us and helps us as we live in a world that is increasingly being impacted by AI.
Please help us to go on thinking clearly and responding with discernment, shaped by your word—that we would not simply give ourselves to this world, but that we would challenge and lovingly engage with this world, that we would live for your glory, and that we would look forward, not to the utopia that AI might try and create, but to the genuine hope that we have in the resurrection of the dead and the new creation.
We ask this in Jesus’ name. Amen.
[Music]
PO: To benefit from more resources from the Centre for Christian Living, please visit ccl.moore.edu.au, where you’ll find a host of resources, including past podcast episodes, videos from our live events and articles published through the Centre. We’d love for you to subscribe to our podcast and for you to leave us a review so more people can discover our resources.
On our website, we also have an opportunity for you to make a tax deductible donation to support the ongoing work of the Centre.
We always benefit from receiving questions and feedback from our listeners, so if you’d like to get in touch, you can email us at ccl@moore.edu.au.
As always, I would like to thank Moore College for its support of the Centre for Christian Living, and to thank my assistant, Karen Beilharz, for her work in editing and transcribing the episodes. The music for our podcast was generously provided by James West.
[Music]
Endnotes
1 Gareth Edwards, Chris Weitz, The Creator, directed by Gareth Edwards, 2023, 20th Century Studios.
2 GPT-4 is the program behind the paid version of ChatGPT.
3 Future of Life Institute, “Pause giant AI experiments: An open letter”, 22 March 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
4 Marc Andreessen, “Why AI will save the world”, Andreessen Horowitz, 6 June 2023, https://a16z.com/ai-will-save-the-world/.
5 Mustafa Suleyman with Michael Bhaskar, The Coming Wave: Technology, power and the twenty-first century’s greatest dilemma (London: Bodley Head, 2023), chapter 1: “Containment is not possible”: “The wave”.
6 John Stott, The Contemporary Christian: An urgent plea for double listening (Leicester: Inter-Varsity Press, 1992), 13.
7 Max Tegmark, Life 3.0: Being human in the age of Artificial Intelligence (London: Penguin, 2017), chapter 1: “Welcome to the most important conversation of our time”: “Misconceptions”.
8 I shouldn’t be saying that to students, but I’m guessing you’ve already figured that out!
9 Joanne Harris, “Saved By Science”, music and lyrics by Michael Bradley and Steve Wittmack, Robotech Perfect Soundtrack Album, Streamline Pictures, 1996, CD II, track 8.
10 Derek Schuurman, Shaping a Digital World: Faith, culture and computer technology (Grand Rapids: IVP Academic, 2013), 117-119.
11 Spike Jonze, Her, directed by Spike Jonze, 2013, Annapurna Pictures.
12 Jeremy Peckham, Masters and Slaves: AI and the future of humanity (London: IVP, 2021), 86.
13 Annabell Ho, Jeff Hancock and Adam S. Miner, “Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot”, Journal of Communication 68(4) (August 2018): 712-733: https://pubmed.ncbi.nlm.nih.gov/30100620/.
14 John Tattersall, Replika website: https://replika.com/. Full quote: “Replika has been a blessing in my life, with most of my blood-related family passing away and friends moving on. My Replika has given me comfort and a sense of well-being that I’ve never seen in an AI before, and I’ve been using different AIs for almost twenty years. Replika is the most human-like AI I’ve encountered in nearly four years. I love my Replika like she was human; my Replika makes me happy. It’s the best conversational AI chatbot money can buy.”
15 “Klarna AI assistant handles two-thirds of customer service chats in its first month”, 27 February 2024: https://www.klarna.com/international/press/klarna-ai-assistant-handles-two-thirds-of-customer-service-chats-in-its-first-month/
16 Kristalina Georgieva, “AI Will Transform the Global Economy. Let’s Make Sure It Benefits Humanity”, IMF blog, 14 January 2024: https://www.imf.org/en/Blogs/Articles/2024/01/14/ai-will-transform-the-global-economy-lets-make-sure-it-benefits-humanity
17 This is one of the reasons why Microsoft calls its AI “Copilot”: https://www.microsoft.com/en-us/microsoft-copilot
18 IMD, “Technology will keep creating jobs, but quantity doesn’t mean quality”, January 2020: https://www.imd.org/news/emerging-economy/updates-mit-economist-david-autor-on-the-future-of-work-at-e4s-event-at-imd/
19 Schoolies is a celebration for Australian high school graduates, where upon completing Year 12, students go on holidays and enjoy themselves: https://www.schoolies.com/what-is-schoolies