Artificial intelligence or AI is one of the most important concepts that people are discussing today. It’s something that, as Christians, we need to think carefully about. Akos Balogh spoke about AI at our March event earlier this year. In this episode, Peter Orr talks to Stephen Driscoll about his new book on AI—“Made in Our Image: God, artificial intelligence and you”—particularly looking at how the gospel helps us to engage with AI and think about it positively and critically. Stephen’s book is an excellent resource for thinking about AI. We hope this conversation gives you a taster of the subject and helps you to begin to think about this important concept.
Links referred to:
- Made in Our Image: God, artificial intelligence and you (Stephen Driscoll)
- Watch: Embrace AI and lose your soul? How to think about AI as a Christian with Akos Balogh
- Our August event: Affluent and Christian? Material goods, the King and the kingdom with Michael Jensen and Emma Penzo (21 August 2024)
- Support the work of the Centre
Runtime: 33:26 min.
Transcript
Please note: This transcript has been edited for readability.
Introduction
Peter Orr: Artificial intelligence or AI is one of the most important concepts that people are discussing today. It’s something that, as Christians, we need to think carefully about. We recently had an event with Akos Balogh on AI, and in this episode, we’re going to be talking to Stephen Driscoll about his new book on AI—particularly how the gospel helps us to engage with AI and to think about it positively and critically. Stephen’s book, Made in Our Image: God, artificial intelligence and you, is an excellent resource for thinking about AI. I hope this conversation will give you a taster and help you to begin to think about this important concept.
[Music]
PO: Welcome to Moore College’s Centre for Christian Living podcast. Today I am pleased to be joined by Steve Driscoll, who has released a book with Matthias Media, entitled Made in Our Image: God, artificial intelligence and you.
We’ll get to the content of the book in a moment, but Steve, thanks for coming on the podcast! Maybe you could introduce yourself, tell us a little about yourself—particularly how you became a Christian.
Stephen Driscoll: Yeah, sure. Thanks for having me! I live and work in Canberra at the Australian National University. I work with the Australian Fellowship of Evangelical Students (AFES). We’re trying to train and grow Christian students at the university, and we’re trying to reach out to the many, many non-Christian students who are there as well.
I’m married to the lovely Lauren. We’ve been married for—oh no, I shouldn’t say that on a podcast! [Laughter]—for a period of time. [Laughter] We’ve got two kids and we live in beautiful, but cold Canberra.
How did I become a Christian? I have two great parents who modelled Christianity to me, who made the sacrifice to get me to youth group and church, and all those sorts of things, but who, more than that, demonstrated that it wasn’t just about them; they weren’t just consumers, but they were on a mission to try and share the gospel.
I grew up in a church that was mainly for Indonesians. My parents are not Indonesian, but we were there to try and minister to them and to be helpful. I had my own difficulties with Christianity, but I became more and more convinced in my own faith in my late teens and early twenties.
PO: Wonderful!
How we should think about technology
PO: Now this book focuses on AI, but it’s much more than just simply about AI. You give a deeply rich, biblical framework for understanding technology. I think that’s why it’s so helpful. Can you just briefly talk to us about how we should think about technology in general? Some Christians instinctively have this negative feeling about technology; others, maybe, are much more optimistic about technology. You talk about how you’re a little bit nervous with people who are too optimistic or too pessimistic; why is that?
SD: Yeah, that’s right. I think one of the things we need to learn to do with the Bible is to hold various doctrines in tension, not to seek simplicity by just getting rid of one side of the picture. If you think about the doctrine of sin and you apply that to technology, then you’ll come up with a thousand reasons to be scared of any new technology, because sin always finds a way to turn things for evil.
On the other hand, you could think about the fact that we’re made in God’s image, that we’re given a creative role from the very beginning of Genesis, that God’s sovereign over the world, and you could end up with a very optimistic picture. If you apply that to any technology, you could come up with a hundred reasons to be positive about it. At the end of the day, neither is a complete point of view.
One example that I talk about in the book is the idea of load-bearing, which is not the most fascinating technology to talk about, but humans have had to figure out how to have stable structures that carry loads up into the air. Three examples of load-bearing structures in the Bible are: firstly, Noah builds an ark. It needed to be able to sit on the water, not capsize and not fall apart, so he could get the three-toed sloths up to the third level and so on [Laughter]. That’s a use of technology. Was that for good or for evil? For good, I would say. Secondly, the Tower of Babel: the top of it sat on the bottom. Was that for good or evil? Well, it was for evil: it was to supplant God and get to heaven without him. Thirdly, the cross: Jesus was lifted up on a cross made by Romans applying their load-bearing technology. Was that for good or evil? For evil, but even that God used for good.
So I want to say that all technology can be used for evil or used for good. The problem is human nature. But even there, we are made in God’s image and God has given us a creative role. Some people will completely retreat from technology, and some people will naively take on board any new technology, and I don’t think either approach is sufficient.
A brief snapshot of artificial intelligence now
PO: The technology that you focus on in the book is obviously artificial intelligence, which we’re all thinking and talking about at the moment. Can you give us a brief snapshot of what artificial intelligence is and how it has developed to the point we’re at now? I know that’s asking you to do a lot [Laughter], but see how you go. [Laughter]
SD: Yeah, sure. Just jump in with any questions to punctuate it as I go.
PO: Sure.
SD: It’s artificial, so it’s not natural. It didn’t arrive via the processes of Darwin and it didn’t come here directly from God. We’ve made an intelligence of a sort.
Then the question is, “What does this word ‘intelligence’ mean?” We use it all the time, but what does it mean to be intelligent? Intelligence, I think, is the ability to achieve a wide range of outcomes, and it’s correlated with all sorts of things, like the ability to acquire knowledge and the ability to apply knowledge. It’s a slippery concept.
I think historically in the search for artificial intelligence, programmers tended to sit down and write “if” statements: “Well, if this happens, do this. If this happens, do this.” The programs were only ever as intelligent as the programmers, and usually a lot less so. It was the applied knowledge of the human race that we were trying to codify. But there were always problems: even classifying a cat as distinct from a dog is very difficult to do just with “if” statements. There are always exceptions and errors, and things like that. So what we had circa 1970–1980 was a bunch of “if” statements, a lot of errors and mistakes, but a lot of processing power.
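To make that concrete, here is a toy Python sketch (with made-up rules) of that hand-written “if” statement style, and the sort of edge case that always breaks it:

```python
# A pre-neural-network classifier: a pile of hand-written "if" statements,
# only ever as clever as the rules its programmer thought to write.
def classify(animal):
    if animal["says"] == "meow":
        return "cat"
    if animal["weight_kg"] > 10:
        return "dog"
    return "dog"  # the fallback guess

print(classify({"says": "meow", "weight_kg": 4}))  # "cat" -- the rule works
print(classify({"says": "...", "weight_kg": 5}))   # "dog" -- a silent cat slips through
```

Every exception needs yet another hand-written rule, which is why this approach hit a ceiling.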
By 1997, computers got better than humans at chess: IBM’s Deep Blue beat Garry Kasparov, the world’s greatest chess player. That was a bunch of “if” statements written by humans, but also a lot of processing power. It could run a simulation forward a hundred million times and figure out what the best move was.
PO: So it’s not so much intelligence—it wasn’t organic; it was predicting all possible outcomes and dealing with them. So it’s not as if the computer was learning or anything like that.
SD: No. Yeah, there’s no intuition. There’s no deeper pattern recognition. There’s nothing like that. Formally, it’s what you call a Monte Carlo simulation, where you just try an enormous number of options and then you go, “Oh yeah. Option #973,000 was the best. We’ll do that.” But there’s no intelligence to that. If I could try a hundred million options, I’d be pretty good at chess. [Laughter]
Garry Kasparov called the computer that beat him a “programmable alarm clock”, because effectively that’s what he thought it was. It was smarter than him, but only because it could try a hundred million options. So that’s the 1990s.
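As a rough illustration of that brute-force idea—using noughts and crosses rather than chess, and random playouts rather than Deep Blue’s actual search—here is a minimal Python sketch: for each legal move, play thousands of random games to the end and keep the move that wins most often.

```python
import random

# Monte Carlo move selection: no intuition, no learning -- just try an
# enormous number of random continuations and pick whatever scores best.
WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_playout(board, to_move):
    """Finish the game with purely random moves; return the winner (None = draw)."""
    board = board[:]
    while True:
        w = winner(board)
        if w:
            return w
        empty = [i for i, cell in enumerate(board) if cell is None]
        if not empty:
            return None
        board[random.choice(empty)] = to_move
        to_move = "O" if to_move == "X" else "X"

def best_move(board, player, playouts=2000):
    """Score every legal move by thousands of random playouts; keep the best."""
    opponent = "O" if player == "X" else "X"
    def score(move):
        trial = board[:]
        trial[move] = player
        results = [random_playout(trial, opponent) for _ in range(playouts)]
        return sum(r == player for r in results) - sum(r == opponent for r in results)
    legal = [i for i, cell in enumerate(board) if cell is None]
    return max(legal, key=score)

print(best_move([None] * 9, "X"))  # usually 4: the centre square wins the most playouts
```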
On neural networks
SD: Now, neural networks go back a long way, but for a long time there wasn’t the processing power to bring them to life, and they could only do very basic things. What happened was that computers became more and more powerful—exponentially so. Then people started throwing enormous amounts of computing power at these neural networks.
Neural networks are modelled, at least partly, on human brains. Brains have enormous numbers of neurons that are connected with each other; neural networks have enormous numbers of parameters or artificial neurons that are connected with each other. You don’t have “if” statements: you don’t have a programmer specifying exactly what each part of the neural network needs to do. There are too many parts. It’s too large. It’s trillions of parameters, possibly. So what you do is you get this artificial brain to learn. They are learning machines: that’s their core competency. That’s the single thing that they’re good at. They’re given a huge amount of training data and they learn.
By “learn”, more specifically I mean that the connections between the little neurons or parameters—the connections are adjusted mathematically until the structure gets better and better and better at maximising whatever the outcome is.
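As a toy illustration of that adjustment process—one artificial neuron with a single connection, rather than trillions—here is a minimal sketch in which the connection weight is nudged against the error after every example:

```python
# One "neuron" learning the rule y = 2x from examples, by repeatedly
# adjusting its single connection weight to shrink its prediction error.
w = 0.0                                        # connection strength: starts knowing nothing
examples = [(x, 2 * x) for x in range(1, 6)]   # training data: input -> desired output

for _ in range(100):
    for x, target in examples:
        error = w * x - target                 # how wrong is the current prediction?
        w -= 0.01 * error * x                  # nudge the connection against the error

print(round(w, 2))  # ~2.0: the structure has got better and better at the outcome
```

Scale that loop up to trillions of connections and vast amounts of training data, and you have the basic shape of how these models learn.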
PO: So they’re not just learning data. Your book was so helpful; I had sort of thought that they’re just absorbing the internet and storing all of the information on the internet. But that’s not what they’re doing. It’s much more subtle than that, isn’t it?
SD: Yeah. Memorisation is one way to store a huge amount of data. The problem is, when you’re talking about the internet, you’re talking about so many millions of terabytes of data that building a neural network to memorise the entire internet is impossible. They’re far short of the size they would need. So what they need to do is grok (and I’ll define that word in a minute) lots of little principles that will help them to be more efficient.
An example: if a computer or neural network is learning addition, it could start off with memorisation. It could go, “Well, 1 + 1 = 2 and 2 + 1 = 3”—and I won’t go any further or I’ll probably make a mistake. But at what point do you use up all your memory just memorising possible sums—“3 billion + 4,217”? Memorising addition becomes impossible very quickly if you’ve got limited memory. So what do you do? You need to find a principle—some generalisable method. We call that “grokking”. Finding a principle means that with a very small amount of memory—a small number of neurons—you can get the right result over and over again.
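Here is a toy contrast between the two strategies—not how a neural network literally stores addition, but the same trade-off:

```python
# Strategy 1: memorisation. A lookup table of every sum seen in training.
# It answers instantly for stored pairs, but the table grows without limit
# and anything outside it draws a blank.
memorised = {(a, b): a + b for a in range(100) for b in range(100)}
print(memorised.get((3, 4)))                  # 7 -- this pair was memorised
print(memorised.get((3_000_000_000, 4_217)))  # None -- never seen, so no answer

# Strategy 2: a grokked principle. One tiny general rule covers every case
# with almost no memory at all.
def add(a, b):
    return a + b

print(add(3_000_000_000, 4_217))  # 3000004217 -- generalises for free
```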
A large language model—a large neural network like ChatGPT or something—has grokked or understood millions of little principles. When all those principles come together, it can do all sorts of incredible things. It doesn’t have the memory to store the entire internet. It does memorise really important things—like the capital of France is Paris, and the capital of Australia is Canberra, our greatest city, where I live—but it can’t memorise everything on the internet. It has to learn little principles.
Those principles might be simple little things like, “If this word is used, it’s probably this tone” or “This probably means there’s passive aggression going on here”, or mathematical principles—understanding some of the principles of calculus. All sorts of different principles can be grokked.
A neural network is sort of layered: it’s not just like a mass of neurons sitting there; they’re actually in a sort of complex structure with each other—on top of each other, like a skyscraper. Generally speaking, the lower levels are doing really basic stuff: they’re just sifting through and going, “All right. Yep.” They might be doing spelling. They might be doing simple punctuation. They’re then passing that up to higher levels that might be doing more abstract stuff. They might be thinking about tone. They might be detecting irony or passive aggression or whatever it is. Sentiment: is this a positive or a negative review on Amazon? But go further up and you can get very abstract stuff going on. Neurons at the highest possible levels might be detecting worldview or logical fallacies or all sorts of—even detecting things that we can’t quite put a word to, because they’re so abstract. That’s the incredible thing about a neural network.
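Here is a toy sketch of that layered, skyscraper-like structure. The weights are random and the labels in the comments are only an analogy for the kinds of features each level might pick up, but it shows the shape: each layer works on the previous layer’s output, so later layers deal in progressively more abstract combinations.

```python
import math
import random

random.seed(0)

def layer(inputs, n_out):
    """One layer: each output neuron mixes every input through its own weights."""
    return [
        math.tanh(sum(random.uniform(-1, 1) * x for x in inputs))
        for _ in range(n_out)
    ]

raw = [0.2, 0.7, 0.1, 0.9]   # ground floor: raw input signals
low = layer(raw, 8)          # lower levels: basic features (think spelling, punctuation)
mid = layer(low, 6)          # middle levels: combinations (think tone, sentiment)
top = layer(mid, 3)          # top levels: abstract combinations of combinations
print(top)
```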
Who we are as human beings
PO: We’ll get to some applications and misapplications of AI, and your book articulates some of those. But what your book does as well as unfolding this history and explanation of AI wonderfully is that you apply the gospel to it. I think that’s what’s so helpful. One of the things that I found really striking was that, in a sense, the first aspect of the gospel or the biblical worldview that you use to shine a light on AI was the question of identity and who we are. That’s reflected in your title: “Made in our image: God, artificial intelligence and you”. The question of who we are as human beings has really come into focus with AI, because no longer can we think of ourselves as more intelligent than the computer—and if computers are more intelligent than us, who are we? What’s our role in the universe? Can you say a little bit about that?
SD: Yeah, and I very much see that as the culmination of a long, historical process. I think our sense of identity has been chipped away at decade by decade—or even century by century. Once we had a very stable identity: I know my God, I know my country, and I know my job, because my surname’s Baker and my dad’s surname is Baker, and his great-granddad was a Baker too. So I know who I am and I need to get on with baking.
But bit by bit, I think some of those identities have been stripped away. I think we live in a world that says that identity matters—perhaps more than it has ever mattered. Almost every bit of popular culture is getting me to reflect on my identity: who am I? Am I really who I think I am, or do I need to change this bit of my identity or that bit of my identity?
Partly I think that’s a Maslow’s hierarchy of needs reality. Maslow’s pyramid is an often-taught concept: at the base, people satisfy their needs for food, shelter, warmth, safety and so on, and then they move up the scale. I think we’re in a society where a lot of people have food, a lot of people have shelter and a lot of people have a level of safety. We’re not at war. There are no wolves chasing us. So we spend a lot of time reflecting on these questions: who am I? Who do I want to be?
I think that AI’s arrival is maybe the next step in this. If a computer can do the sorts of things that I can do—if it can make art better than us, make music better than us, produce our TV shows for us (maybe that one will happen a bit later on, but certainly art and music)—and then productivity: if it starts to do some of the jobs we do, like law or accounting, maybe we won’t be doing those functions anymore. Eventually, we’ll be faced with the question of how much work the human race actually needs to do.
Bit by bit, some of these identities are being taken away. I’m just raising the question of whether that identity crisis is going to get worse and worse, and pointing out that we don’t have a good answer in our culture. We’re not able to establish durable identities that help—particularly young people—and I think as a consequence of that, we’re seeing a real crisis in youth mental health and in people just feeling discombobulated as they go through life.
PO: You have the wonderful illustration of airlines and how, early on, airlines used to advertise in terms of their safety. Then they began to advertise in terms of their comfort. Maybe they advertised in terms of their prices. But now, airlines advertise in terms of “If you fly with us, you’re this type of person”. It’s very much tied to our identity. That’s true in so many advertisements about random products. Your book is so helpful in pointing us to our identity as children of God—particularly as Christians we’re children of God, but as human beings, we’re made in God’s image.
[Music]
Advertisements
PO: The world is becoming wealthier and wealthier. Since the turn of the century, the net worth of many countries in the West and in Asia has tripled, poverty rates have fallen, and life expectancy has increased by more than six years.
At the same time, the divide between rich and poor has increased, with the richest one per cent owning almost fifty per cent of all the world’s wealth. Five to ten per cent of people still live in extreme poverty, even in the most affluent nations. Furthermore, while money can buy happiness, it can only do so up to a certain point, and wealthier people are more likely to be less generous and less kind to others.
How as Christians should we think about affluence? Is material prosperity a blessing or a curse, or both? Given the state of the world and income inequality, what are we to do with the riches God has given us? We’d love you to join us on 21 August when Michael Jensen, rector of St Mark’s Anglican Darling Point, will help us to see our earthly treasure the way our heavenly Father does.
And now let’s get back to our program.
Technology and sin
PO: Thinking more broadly about applying the gospel to this technology, you have a chapter on sin. You’ve already touched on that. But does technology amplify sin, or is that too simplistic a relationship?
SD: Yeah, and I should point out that the thing you quoted—the airlines thing—I think is at least partially borrowed from Christopher Watkin and his book, Biblical Critical Theory. But yeah, I think that’s right. I think sin is a constant: sin always exists. From the very beginning until now, sin exists.
But technology is a sort of force multiplier: it’s the multiplication in the formula. Technology allows us to sin on a greater scale with greater consequences. The person with a stick can do a certain amount of sin against his brother. But if you gave him a nuclear weapon, he could sin in a whole different order of magnitude. I trace that a little bit through part of the Old Testament.
I think that artificial intelligence will allow us to sin in new ways—in new categories of sin. But also, there will be a change in the quantity. The Stasi—the secret police in East Germany—could, I think, wiretap 50 phones at once. It was a totalitarian state, but one with limited capacity: they only had a limited number of employees who could listen in on the telephones. Most of the time, you could be confident that no one was eavesdropping. Now, the American NSA apparently has the ability to listen to millions of phones. Some authoritarian states overseas may be able to take in almost every single phone call in their nation. A certain superpower with over a billion citizens has half of the world’s CCTV cameras. I think artificial intelligence will allow them to process those enormous streams of data and to actually watch their citizens in far closer detail than has ever been possible before.
That’s one example. Sin has always been there. The desire for man to dominate man—that’s always there. But technology brings in a new order of magnitude—a new way—new categories of sin.
The cross and existential risk
PO: You have a chapter on the cross, but you launch that chapter with the issue of existential risk. Do you want to explain the connection between the cross and existential risk, and how that plays into AI?
SD: Yeah, sure, sure. It will take me a little while to get there, but yes, I do think the issue of existential risk [Laughter] is related to the cross ultimately.
Existential risk is the idea that artificial intelligence could potentially be a risk to the existence of the human race. There are a few other things that may pose an existential risk to us, but people claim that artificial intelligence is such a risk.
Partly the issue is that you have something that’s incredibly powerful, but that will come up with sub-goals we may not like. You may tell it to do something good, like “Produce more food”, but in order to do that, it comes up with sub-goals: it goes, “Well, in order to produce more food, I need more farming land. In order to have more farming land, I’d better get rid of the people,” and so on. That’s the idea of sub-goals. You even see this in miniature: little artificial intelligence systems come up with sub-goals that you don’t want and that do harmful things.
The existential risk question is, “What happens when these artificial intelligences are operating on a much bigger level?” One possible solution is to say, “Well, we’ll just specify all the sub-goals. We’ll just specify exactly what we want these intelligences to be doing for us.” The problem is that there’s always stuff you haven’t thought of. The tax code is always growing, never shrinking, but there are always omissions. There’s a limit to law, and we even see that in the Bible: you can’t codify every possibility.
So there’s then a growing attempt to imbue artificial intelligences with a purpose or even with a moral system—to try and give them a sense of right and wrong so that they can autonomously figure out what they should and shouldn’t be doing.
That leads us to a huge problem, which is that we don’t have a generally agreed purpose for the human race. Christians have one view; other people have different views. Our society is very divided on what our purpose is. Some people think our purpose is to eat, drink and be merry. Other people disagree: they think our purpose is to glorify God and enjoy him forever. Which one is right, and which one do you put into the artificial intelligence? [Laughter] So that’s problem #1.
Problem #2 is that when it comes to morality, we’re very divided. I trace through a modernist, a postmodernist and what I call a “consensus optimist” approach. But all three have problems and limits, and without God, Jesus, Christianity and the cross in particular, I think there are issues with each of the three.
A lot of people, when it comes to existential risk, worry what will happen when there’s an artificial intelligence that goes bad in some way or another—or there’s an artificial intelligence that’s immoral. This would be very dangerous. I want to suggest that the opposite could be true as well: I think a moral artificial intelligence could be quite dangerous.
PO: In what sense? Surely a moral artificial intelligence is a good thing.
SD: Yeah, yeah. I think that God is moral, I think God is righteous, and I think God is just. But in the Bible, we learn that that actually puts us in danger [Laughter]. We’re in a state of existential risk from Genesis 3 onwards: God is almost at the brink of wiping us out before Noah is saved. Then in Romans 1, God looks down at all the unrighteousness of our species and he’s angry. So I want to suggest that an artificial intelligence that really was moral and really was righteous would actually be quite angry at the way we behave and at many of the things that we do. This idea that we’ll just make an artificial intelligence moral and then we’ll be safe—I want to suggest that, biblically, that’s not enough.
I’ve finally answered your question! That finally brings us to the cross, where you can sort of see justice, but you also get this incredible alien element of mercy coming in—that God, Trinity—God, Father and Son—are able to give us the cross, where God is just, and there is punishment for all our wickedness, but God in his own person is also able to offer us mercy.
PO: Yep. Steve, it just struck me as you were explaining that: the real strength of your book is that you’ve thought deeply on the technological side of things and you explain it so well, but the theology you’re articulating is clear and biblical without being simplistic. You’re showing how the Bible gives a rich answer to the questions that AI throws up, and it’s very helpful.
AI and the new creation
PO: The final biblical reflection you give is on new creation. Can you talk a little bit about how new creation feeds into how we think about AI?
SD: Yeah, yeah. Again, I want to situate it within a history of technological change. I make the claim that we live far wealthier lives than people did in the preindustrial world. I think we often have very poor intuitions about how wealthy we are: our tendency is to notice the failures of our society. But when you read about societies in the 16th, 17th or 18th centuries, it’s a pretty glaring difference. We live lives atop hundreds upon hundreds of layers of key technologies that make what we’re doing possible. We have incredible health, on average; incredible wealth, on average; and incredible freedom, on average. Not everyone in the world has this.
But artificial intelligence offers more of all three: more health, more wealth and more freedom. I think it offers more of what I call the “unbundled life”, where we can pick and choose what we want. We’re not beholden to communities and we’re not beholden to families.
To give an example of this, the first wave of television came in and I think it disconnected families from each other. Putnam at Harvard says that there was a 50 per cent decline in American civic engagement between 1985 and 1994—stuff like whether people had dinner with their neighbours, whether they went to parent-teacher nights, whether they voted or whether they went to church. A 50 per cent decline in nine years! I think the primary explanatory factor he points to is television: that first wave pulled families apart from each other. But still, you had the family coming together: after dinner, everyone would still watch the same show. There were only six or seven shows, and then you’d talk to your friends about them.
The next wave was for everyone to have effectively six TVs in their houses, if you include iPads and phones. Now, no one’s even watching the same TV show as each other, and that pulled families apart internally, as well as disconnecting them from each other. I think we’re starting to live with the repercussions of that, and most parents I talk to really struggle with how difficult it is to build a collective family identity, and to disciple their kids when they’re on YouTube or TikTok far more than they’re actually talking to them.
I think artificial intelligence is just the next step. Each person has their algorithm, which gives them the entertainment and the information they want. It’s completely unique; there’s no other person on the planet who has your feed, your TikTok or your YouTube. That means that you don’t necessarily have anyone at the school who’s sharing a common experience. Certainly no one in the family.
My concern is that we’re having more and more freedom, more and more entertainment, more and more health and wealth, and there’s a lot of good going on there. But we’re also losing community, losing family and losing discipleship. I want to ask whether this is the new creation—whether this is life as good as it gets—and then reflect from the Bible on what God actually offers, which is a model of a new creation based on love, on community, on individual sacrifice for the benefit of a collective.
Church is a great example of this: we don’t all get what we personally individually want all the time, and if you adopt that approach to church, you’ll join a YouTube church that will be exactly what you want. But you’ll miss something that’s more significant. You’ll end up impoverished. Church is when you come to a place with people who are different from you; you listen to announcements that aren’t for you; and you listen to songs that aren’t your favourite song.
Just to circle back to your first question, my parents were an example of this: going to an Indonesian church—a church where the median age was probably about 20, even when they were 50—and going not because it was exactly the thing that they individually wanted all the time, but going because there was something more meaningful about sacrificing and loving other people.
PO: That’s very helpful, Stephen.
The practical benefits of AI
PO: Throughout the book and throughout our interview, you’ve shown us how the gospel helps us to think correctly about technology—about AI. You do have a section at the end with the practical implications. That’s really helpful. Recognising the false promise that AI can offer, the dangers that are there, does AI excite you at any level? Do you see any practical benefits in AI? We’ve spoken a lot about how we’ve got to be careful how we think about it; we mustn’t put too much weight on it. But particularly for us as Christians, what practical value do you see for AI?
SD: At first, you might think, “Oh, are Christians going to get involved in this? Is there any possible use of this for the gospel?” But looking back, we’ve discovered many, many uses for the internet, for computers, for smartphones, and I think AI will be just like that.
In particular, I think it will be very helpful for Bible translation—not just translating between languages, but even translating from simple English to advanced English, or from advanced to simple.
I think that AI is a really good conversation partner. It’s actually very helpful to get it to read some of your stuff and to suggest improvements, or to give you ideas if you’re stumped: “Give me three things I could do next” or “Give me question ideas for a Bible study that I might use. What’s a good opening question for this chapter?” I think it’s actually a really good conversation partner.
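As one concrete illustration of that kind of use, here is a minimal sketch with the OpenAI Python library; the model name and prompt are placeholders, and any chat-capable model would do:

```python
# Asking a large language model for Bible study question ideas -- a
# conversation partner to prompt your own thinking, not to replace it.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: use whatever model you have access to
    messages=[{
        "role": "user",
        "content": "Suggest three opening questions for a Bible study on Philippians 2.",
    }],
)
print(response.choices[0].message.content)
```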
I think it’s probably unwise to be getting it to do stuff unsupervised. I’m particularly thinking of Christian activities like Bible studies [Laughter] or anything like that. It’s still good to really work with it. But I think it’s important to be honest with people about what is written by you and what is written by an artificial intelligence.
The other thing is, I think it can teach us stuff. I think it can teach us theology. It’s a bit like having a much better version of Wikipedia that can give you a good start in all sorts of different topics.
There is the problem of hallucinations: like a very keen 19-year-old, sometimes artificial intelligence doesn’t disclose the limits of its knowledge. It doesn’t say, “I don’t know.” Instead, it just sort of comes up with a plausible, possible answer from the world of fantasy. So you have to be careful: it will sometimes confidently say things that are just not true. That’s why you have to keep checking the outputs and so on. But it’s just getting better and better and better.
Conclusion
PO: Steve, thank you very much. Thanks for writing such a helpful book. Thanks for your time on the podcast. The book is entitled, Made in Our Image: God, artificial intelligence and you. I highly recommend it. It’s a great overview of AI, but more than that, it’s a really helpful book in enabling us to think Christianly about AI in light of the gospel. Thanks again, Steve!
[Music]
PO: To benefit from more resources from the Centre for Christian Living, please visit ccl.moore.edu.au, where you’ll find a host of resources, including past podcast episodes, videos from our live events and articles published through the Centre. We’d love for you to subscribe to our podcast and for you to leave us a review so more people can discover our resources.
On our website, we also have an opportunity for you to make a tax deductible donation to support the ongoing work of the Centre.
We always benefit from receiving questions and feedback from our listeners, so if you’d like to get in touch, you can email us at ccl@moore.edu.au.
As always, I would like to thank Moore College for its support of the Centre for Christian Living, and to thank my assistant, Karen Beilharz, for her work in editing and transcribing the episodes. The music for our podcast was generously provided by James West.
[Music]
Image by Kohji Asakawa from Pixabay