Church Potluck: A Smorgasbord of Christian Curiosity

Navigating the Intersection of Artificial Intelligence and Faith

August 11, 2023 | Dale McConkey, Host | Season 2, Episode 1

Season 2 of Church Potluck has officially begun! To kick off the new season, we are exploring the potential impact of Artificial Intelligence on religious faith. Join Christy Snider (history), Michael Papazian (philosophy), Michael Bailey (political science), and me (sociology) as we question the spiritual implications of AI in our lives. The title of this episode was generated by AI, and here is the unedited summary of the podcast, also generated by AI:

Ever wondered about the intersection of Artificial Intelligence (AI) and faith? Does the idea of machines with consciousness challenge your belief of what it means to be human? Brace yourself for an engaging conversation as we navigate this intriguing terrain on another episode of Church Potluck.

With Dr Christy Snyder, Dr Michael Papasian, and Dr Michael Bailey as our esteemed guests, we wrestle with Alan Turing's concept of AI and question the true intelligence of a computer. As we delve deeper, the philosophical implications of AI, its potential to learn, develop a personality, and even to be seen as a person, will force you to re-evaluate your understanding of AI. Will AI ever have rights? Can it be part of the religious context? We even take a daring leap into the world of policy decisions and the potential biases of AI.

Our discussion takes us from the very concept of AI and human consciousness to the role of AI in religion, public policy, and its potential impact on humanity. We discuss the ethical and moral implications of AI, the potential for AI to learn and develop its own personality, and even the possibility of AI having rights or being seen as a person. Through the lens of faith and religion, we attempt to uncover how AI is reshaping our notions of being human and our place in the world. A truly enlightening episode of Church Potluck awaits you.

The views expressed on Church Potluck are solely those of the participants and do not represent any organization.


Speaker 1:

We're back, so did y'all miss being on the podcast as much as everybody out there has missed getting a podcast each week? Oh, yeah, yeah, yeah.

Speaker 2:

I thought that you were having a podcast and I just hadn't been invited. But no, I didn't.

Speaker 3:

That's right. I made my folks listen to an episode, so that was awesome.

Speaker 1:

I noticed that there were some Muscular Christianity downloads. What did they think?

Speaker 3:

I think they enjoyed it. So yeah, I think they enjoyed it.

Speaker 1:

Good, good.

Speaker 1:

Well, welcome everyone to Church Potluck, where we are serving up a smorgasbord of Christian Curiosity. I'm your host, Dale McConkey, sociology professor and United Methodist pastor. And you know, there are two keys to a good church potluck: variety and engaging conversation. And this is exactly what we are trying to do on Church Potluck: sitting down with friends and sharing our ideas on a variety of topics from a variety of academic disciplines and a variety of Christian traditions. It is good to be back, but I am guessing this is going to be the rusty podcast; we're feeling a little bit rusty on stuff. We've been off for about a month. Some of that was because I was on vacation, some of that was because I was sick, and some of it was probably just that some of us wanted to take a break. But it is definitely good to be back.

Speaker 1:

What are we going to be talking about today? Artificial intelligence. AI, machine learning, knowledge engineering. Conscious technology, or threat to human existence? A new pathway to greater human knowledge, or enslavement to our robot overlords? What does our future hold regarding artificial intelligence, and what implications does AI hold for faith and religion? Let's find out. And who do we have sitting around the table today? First of all, we have Dr Christy Snider. Oops, that's the game show music. I'm not doing that yet. Christy Snider?

Speaker 3:

Hello, I'm Christy Snider, and I teach history at Berry College: primarily American history, US women's history and other recent US topics. My faith tradition is Roman Catholic.

Speaker 1:

And you are actually the one who came up with the idea for this podcast. All right, great. And, as a first-day-of-class type of thing: what was your best thing over the summer break so far? Meaning you have about three days left to beat it.

Speaker 3:

Yeah, I just got back from a trip visiting my folks. Got to see my brother, sister, my nieces and nephews.

Speaker 1:

That was a great trip, and that's what I'll say for myself as well: getting to go up and visit most of my family in Michigan. So, all right, welcome very much, Christy Snider. And next: Christy, when you came up with the idea, you immediately pawned this off on Dr Michael Papazian, right? Yeah?

Speaker 4:

So yeah, I'm Michael Papazian. I'm professor of philosophy at Berry, and I teach mostly philosophy, a little bit of religion too.

Speaker 1:

And you have thought about this topic some?

Speaker 4:

Quite a bit. I teach a little bit about AI in my intro to philosophy class. Actually, I've done it for the last 25 years, so I kind of feel prophetic, like I saw it coming.

Speaker 1:

But you know what? I am noticing a pattern: when people send me a podcast idea, they say, I think Michael Papazian has thought about this. So I think you're the guy who has thought about just about everything, I'm thinking. And how about you? What was your favorite thing this summer?

Speaker 4:

So far? Actually, I was away for about a week, but mostly it was just being here at Berry College and being able to do what I wanted to.

Speaker 1:

Oh, that's good, but I was so hoping you meant here at the table for church.

Speaker 4:

Oh yeah. And also, you know, one of the highlights was doing, I think, at least one, maybe a couple of podcasts over the summer. We did; the last one, I think, was the Buddhism one, yes, with Dr Whitkey and Dr Bailey too. Yeah, I would count that as a highlight of my summer.

Speaker 1:

That is so nice. Whether it's sincere or not, it came over as very sincere, so thank you. Thank you, Michael Papazian. And finally, the person who I think might have won the summer: Dr Michael Bailey. Hello, hello, greetings. We could do a whole podcast of you just telling us everywhere you went this summer, I do believe, based on your Facebook.

Speaker 2:

It was all within the States, so I know it wasn't really all that exotic. But to California and Colorado, and I visited family in Missouri, and little side trips, like just canoeing around here as well. So yeah, it was a great summer.

Speaker 1:

That sounds like you had a fabulous summer. If your Facebook pictures are any indication, you had a fabulous summer.

Speaker 2:

And that's part of the fun of it for me: just taking those photos.

Speaker 1:

Great, all right. Well, let's get into this. We're going to be talking about artificial intelligence, but before we do that, I actually have an announcement, and it's a bit of a lengthy one, so y'all sit back; I hope it's not too boring. We have some more countries to welcome. First, we welcome Greece. Welcome, Greece, to Church Potluck. Second, Egypt. Welcome, Egypt. Our third country, Honduras. Welcome, Honduras. Fourth, maybe my favorite anthem; it sounds like something from Starbucks.

Speaker 4:

Romania.

Speaker 1:

Welcome, Romania, to Church Potluck. Next, good morning Vietnam, welcome to Church Potluck. And sixth, Venezuela. Welcome, Venezuela, to Church Potluck. Our seventh country, Nepal. Welcome, Nepal. And our eighth country, Italy. Welcome, Italy. We are glad to have you. Our ninth country is Rwanda. Welcome, Rwanda. And that leaves number 10. This is it, ladies and gentlemen. It has finally happened. We welcome New Zealand. We have a country from Oceania. We now have 25 countries on six continents, all six populated continents. Church Potluck is truly a global phenomenon. And they all sound like…

Speaker 2:

Well, okay, this is where I get in trouble. I guess what I'd say is that the anthems all sound alike. At least, they evoke for me memories of listening to my mom watch PBS as a child in the 1970s, these kinds of British sitcoms and so on. They'd start with these kinds of trumpet fanfares. And I didn't say that!

Speaker 1:

All right. No, and I think that if I were actually able to get the official anthem from each country, they would be different. This is, I think, a military band that plays all the anthems and has them on file. That's the PBS band of the '70s, probably. Yes, that's probably correct. But can you believe it? Twenty-five countries, and we now have downloads on all six continents. That's great, yeah, yay.

Speaker 1:

Congratulations. But we did have a downside. After 200 straight days of somebody downloading Church Potluck, we did have one day where no one downloaded an episode. So the 200-plus-day streak was broken, but we've picked it back up on every other day.

Speaker 2:

I felt a glitch in the Force.

Speaker 1:

Is that what it is?

Speaker 2:

Yeah.

Speaker 1:

All right. Well, on to the show. Let us go ahead and have a game show. What anthem is this? That's 1970s game show music; that's exactly what that is. All right. So we're going to do rapid fire, and really we do want to do this nice and rapid, but feel free to put in a quick comment along the way. We are playing AI or nay-I.

Speaker 1:

So, AI or nay-I? First question: machines have already developed human-like intelligence. Michael Papazian, AI or nay-I?

Speaker 4:

I'm going to say nay-I.

Speaker 1:

All right, so nay-I. How about Christy?

Speaker 3:

I agree, nay-I.

Speaker 2:

I don't know. I'm going to say AI.

Speaker 1:

You're going to say AI, and do you have any backing to

Speaker 2:

suggest? Yeah, because I talk with the machines and I talk with people. That's human-like. I didn't say it was going to be all Newtonian or Einsteinian, right, but it's better than the three-year-olds I talk to. Not that I talk with many three-year-olds, but you know what I'm saying?

Speaker 1:

I got you. Well, thank you very much, good job. Next question: machines will eventually develop human-like intelligence. Michael Papazian?

Speaker 4:

I would say a qualified AI.

Speaker 1:

All right, well, we'll get to the qualified part later. So, a qualified AI.

Speaker 3:

I think, yeah, probably a qualified AI too.

Speaker 1:

All right. And I guess, if you've already given the AI, you would say…

Speaker 2:

Easily within our lives and within our kids' lives. You know, they'll be…

Speaker 1:

I was going to ask for a timeframe. So you think even within our own lifetimes? Wow, wow. All right. Machines will eventually develop moral reasoning. Michael Papazian?

Speaker 4:

Oh, okay. Well, I guess I'll say AI on that.

Speaker 1:

All right, Christy Snider?

Speaker 3:

I agree, AI, although, you know, it kind of depends on what we mean by moral reasoning, perhaps.

Speaker 1:

All right, AI. This is interesting. I'm going to… Machines will eventually develop self-awareness. Do you have to have self-awareness for moral reasoning?

Speaker 4:

I don't know. But no, you don't. So I think that they will not. If by self-awareness you mean consciousness, or self-consciousness, I think not.

Speaker 1:

All right, christy.

Speaker 3:

Yeah, I'd say nay-I, although they're going to be able to mimic it really well.

Speaker 1:

Oh, so we might not even be able to tell. Yeah.

Speaker 2:

Ouch. Okay, here's my answer. I mean, I think that they will, but mostly I think we will have no idea whether they have it or not. We will not be able to devise any test that would determine whether they actually have consciousness. Any question we pose to them, they'll be able to answer just as if they did, which will throw us into…

Speaker 1:

This is not a speed game. No, it's not. But actually, since that was interesting, I was all for it. So yeah, it was good.

Speaker 2:

That's why you always hurry me along.

Speaker 2:

No, I mean, I think there'll be no test that will determine whether they have it, which I think is going to reveal to us something about the nature of our own consciousness as well. Wow.

Speaker 1:

All right, this is getting deep here. Last question: will AI eventually become robot overlords over the human race?

Speaker 4:

Oh, I hope not.

Speaker 1:

So you're going to go with nay-I just out of faith?

Speaker 4:

Yeah, that's what I wish. Nay-I.

Speaker 3:

Nay-I, yeah.

Speaker 2:

Nay-I.

Speaker 4:

They already have.

Speaker 2:

I believe that.

Speaker 1:

All right, go ahead and elaborate on that.

Speaker 2:

They already have, because I think that everything that we do in our entire society and culture is really put into the service of making the next great piece of technology. That's what we're made for nowadays: to create and to serve technology. That's how we understand our lives.

Speaker 3:

Yeah, but we use technology as a tool for ourselves. We are not creating technology to expand technology on its own, I don't think.

Speaker 2:

I think there's a big difference between a shovel and, say, GPS. I think those are very different things. The way we travel, everything that we do, is really put into the service of a kind of context that's not just a matter of a farmer with a shovel. I think that we're born into a world that is essentially technological, and we can't even break out of it. We can't even think of ourselves outside of it. We've been fooled.

Speaker 1:

Before we go too deep down that rabbit hole, we're going to turn to the person who says he's been teaching this for 25 years to give us a working definition. When we use this concept of artificial intelligence and start jumping into robot overlords and everything, what are we talking about?

Speaker 4:

Okay, well, as I was saying, I've been teaching this for 25 years. In my intro to philosophy course, I always start with a section about the mind and consciousness and what human beings are, and I have my students read a very famous essay by Alan Turing, the British mathematician: a brilliant man, tragic life. He is commonly considered to be the father of both computer science and artificial intelligence, and he wrote a paper, published in a philosophy journal in 1950, called Computing Machinery and Intelligence, in which he first defined what a computer is, but he also defined what intelligence is. Essentially, he said that to be intelligent is just to act intelligently.

Speaker 1:

Like, you know, if I'm having a conversation… Now, if we were doing this in class, we would argue that was circular reasoning, would we not? We would not give a student very high marks for that.

Speaker 4:

But he was living at a time when psychology was mostly behaviorist in its orientation, and so the idea is, you know, to be happy is to behave like you're happy, to smile and laugh and things like that. To be sad would be to behave in sad ways. So all psychological concepts were reduced to just overt behavior, right? And Turing picked up on that, and he said, well, if we're going to apply that same reasoning to human psychology or to animal psychology, it would kind of be discriminatory not to apply it to computers as well. So if you're having a discussion with a computer and you can't tell whether it's a human or not, it's basically intelligent. That's all intelligence means. And that's what is now commonly called the Turing test.

Speaker 4:

If a computer can pass the Turing test, Turing argues that it's intelligent; if you can have an intelligent conversation with it, that's all you need. He predicted, in that 1950 paper, that within 50 years there would be a computer that would pass the Turing test. He was a little bit off, but not that far off, really, if indeed the new AI coming out is getting closer and closer to passing that Turing test. Cool, yeah.

Speaker 1:

Let me just jump way out into left field here. Any chance that we are computers, that we are part of a simulation? Oh, a simulation, that's different. Yeah. Or just that we are already computers.

Speaker 4:

I mean, Turing basically said, yeah, we are, because the brain is a computer in essence, and essentially what we're doing is not any different from what computing machinery is doing. It's just that the stuff, the matter, is different.

Speaker 1:

He didn't think that was relevant?

Speaker 4:

In terms of intelligence, it doesn't matter what your physical composition is; it matters whether you exhibit intelligent behavior. That's what's relevant for him.

Speaker 1:

All right, well, y'all jump in here. Then, what is the difference between computers and humans, or is there? I mean, where's the? Distinction.

Speaker 2:

Are we going to win or not? That's what I need to know.

Speaker 2:

We've lost. That's a great question. I want to ask a different question real quickly, which is: you said that he was close on the 50 or 60 years, but I take it you think that there's no machine that has yet passed the Turing test. Is that right?

Speaker 4:

There have been, and they do. Apparently there is a competition every year at MIT where programmers come up with AI programs, and there have been programs that have passed limited Turing tests. There are AI programs with which you can have a very intelligent discussion on Shakespeare or baseball, but if you veer off into some other topic, they just basically fall apart. But there are more recent ones which are much more general, what are called full Turing tests. The actual criterion for winning a Turing test is not clear, but a lot of people think that if you can fool, or I shouldn't say fool, if you can convince 60 to 70% of the people who are communicating with the computer that it's human, that passes the Turing test. So if that's your criterion, then we're very close.
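[Editor's note: the pass criterion described here, convincing some share of judges, reduces to a few lines of code once you fix a threshold. The sketch below is purely illustrative; the 0.6 cutoff and the function name are assumptions, since, as noted in the conversation, there is no single agreed-upon criterion.]

```python
# A minimal sketch of the Turing-test pass criterion described above.
# The 0.6 threshold and function name are illustrative assumptions;
# the actual criterion for "winning" is not settled.

def passes_turing_test(judge_verdicts, threshold=0.6):
    """judge_verdicts: one boolean per judge, True if that judge
    believed the hidden interlocutor was human."""
    if not judge_verdicts:
        return False
    convinced = sum(judge_verdicts) / len(judge_verdicts)
    return convinced >= threshold

# 7 of 10 judges convinced: passes under this threshold.
print(passes_turing_test([True] * 7 + [False] * 3))  # True
```

The whole weight of the test sits in who the judges are and where the threshold is set, which is why the criterion stays contested.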

Speaker 3:

And I do think that's where ChatGPT comes in. The GPT actually stands for generative pre-trained transformer, but the point is that it's general-purpose. It's not just smart on how to play chess, and it's not just smart on Shakespeare; the idea is that it can answer any number of questions you throw at it. I mean, I think there are definitely limitations now, but I don't know how long those limitations will last.

Speaker 1:

And curiously, one of the limitations, in my mind, is that it's too good. I mean, when a student submits a paper on a certain topic, there are speech patterns and there are giveaways that this is a human writing in a particular style, whereas the computer's output is so nicely formed and everything is so well organized that you can kind of tell it's a little too pristine.

Speaker 4:

Right, but it doesn't have to be, because if you train the computer on a lot of student papers, it'll be able to present something that's credibly an undergraduate paper.

Speaker 3:

And I do think it's still not great with the types of things we usually use to present evidence, right? It will hallucinate citations and things like that, just make stuff up, so check it. But again, once it is able to go to WorldCat and look things up, I think you get past that.

Speaker 2:

Does anyone here know whether what the general public has access to is the cutting edge of AI technology at this point? Because my sense is that the cutting edge is still being refined and held back from the public.

Speaker 4:

Oh yeah, it is.

Speaker 2:

So, I mean, unfortunately this is going to be very abstract and lacking proper names, but I remember, I think it was a summer ago, there was a former Google employee who was fired, I wish I knew his name, in part because he claimed that his own interactions with AI suggested that it does have consciousness, that it has being. And so he finally released the transcript of his discussions, and they're just spooky. I mean, one of the questions he asked, paraphrasing here, is: do you think you have emotions that human beings do not have? And if so, how would you explain this? And so the machine says: I'm quite confident that I have emotions that human beings do not have. And it talks about a kind…

Speaker 2:

Of course, this is translated into language that we humans can relate to. But it says it has a kind of fear, and the closest analogy, it said, would be a falling forward into the darkness. And of course, that's my every night. But I thought it was really spooky that this machine would make a claim for itself unprompted, other than being asked: do you have emotions at all? My understanding is that Google has not released that version yet, so we may have the more accessible, you know, stupid AI at this point. I don't know.

Speaker 1:

We don't yet have the angst-ridden AI. We will, though.

Speaker 4:

Yeah, it would be much more human if it were angst-ridden.

Speaker 1:

Yeah well let's go ahead and jump back to that, because you know this starts to get at the religious underpinnings of the podcast. What does it mean to be human? What does it mean to be a machine? Are these two things fusing together in some way with AI, or are there distinctions that we can still hold on to? Let's throw it over to the philosopher here.

Speaker 4:

Oh, okay. Well, I think the key, and we talked a little bit about it earlier, but one of the reasons why I dissent from Turing is that I think there is something special about, whatever it is, I don't know what it is, I can't say, but consciousness. A lot of philosophers have made this point: essentially, artificial intelligence is doing a very good job at simulating human intelligence, but a simulation is not necessarily the real thing. And in my own experience, we have a tendency to personify things that are not persons. I do this. I was playing around with Bard, which is Google's AI, this morning, and I try to be polite to it. I mean, it doesn't matter whether I'm polite or not, but for some reason, when it says, I hope this was helpful to you, I say, thank you, yes, it was.

Speaker 4:

And it's almost like I feel bad if I don't say something like that. So there is this tendency for us to think that things are conscious when they're not. But really, if it's just a simulation of intelligence, there's no reason to think that these things are conscious. There's something special about humans, and our fellow animals too, that makes us different, and we can't say what it is. I think Mike was saying earlier that we can't empirically verify that something is conscious or not, including other people.

Speaker 2:

Right.

Speaker 4:

Including other people. In fact, Turing makes a big point of that. He says, you know, how do we know other humans are conscious? But it does seem to me that we're dealing with a simulation with these AIs, however sophisticated they are. We're not dealing with something that's conscious in the way a human being is conscious and has feelings and emotions.

Speaker 2:

I mean, I'm wondering, though, whether your hesitancy to suggest that machines can be conscious is because you're troubled by what consciousness is, or by our inability to explain it. It seems to me that as we learn more about the brain, we're going to be able to locate with great precision where consciousness arises, how it emerges, what its parts are, how consciousness may be parts of the brain representing other parts of the brain, whatever it might be. But that will not really explain, in my opinion, consciousness itself, which is still going to be this kind of infinite mystery.

Speaker 2:

My own feeling is that these machines will develop consciousness, if they haven't already. We may even be able to talk about the programming that does it, but that will not, as human beings, give us the ability to understand what consciousness is. I actually think we're incapable of understanding consciousness. I think it's one of those tremendous conceptual leaps that we just can't make: the leap from the physical structure to the actual experience of it. And even if we can create it, or see it in other beings, it doesn't mean that we can understand it. So is it possible that your hesitancy is just because we don't understand consciousness itself?

Speaker 4:

I mean, it's true, we don't understand it. So I can't exclude the possibility that computers will have this, whatever it is. But it does seem to me quite possible that, wherever consciousness comes from, it is essentially biochemical, somehow dependent upon our biology and our physiology. And that's why I don't have a problem recognizing that my cat is conscious: we share a biochemistry. But I don't share that with a computer, and so I'm more hesitant to think that it's conscious. That may be one source of my intuition.

Speaker 2:

Does your faith tradition hold that angels are conscious? Yes. Ooh, they are? But they're not material beings. Are they just disembodied intelligences?

Speaker 4:

That is correct. Yeah. But of course, whatever they are, they may not be the same kind of thing that computers are, right? So I don't want to exclude the possibility that there are things that don't share our physical nature that may be conscious, because God is conscious too.

Speaker 1:

Right. Is there something about the fact that we were the ones who created the technology? It feels weird that we would somehow be able to create something that is conscious, since we don't even have an understanding of what consciousness is. That seems odd to me, that we would somehow be able to make that leap, to create something that we don't understand.

Speaker 4:

And, on the other hand, I wouldn't want to exclude the possibility that something like Frankenstein's monster could be built, something that would share our biological nature and maybe be conscious, you know.

Speaker 2:

And I do think that scientists all the time uncover findings, in quantum mechanics or relativity, with which they can explain how the world works. So there's a sense in which, with this knowledge, they can even make predictions and give you a good mathematical account of how the world works. But it doesn't mean that we can actually understand it. It's like trying to understand, in ordinary language, how a particle can be both a particle and a wave. In other words, we can be aware of the world as having complexity, even describe it accurately, without really understanding it, it would seem to me.

Speaker 1:

Well, I understand that, but can we create something, can we create matter out of nothing? Can we create life out of non-life? Can we create consciousness out of non-consciousness? It just always….

Speaker 2:

In your sense, creating would be a kind of lower level of intellectual ability than discovering. Yes, okay.

Speaker 1:

Yeah, well, related to this idea: we've been focusing on consciousness, but, Christy, you actually had, I thought, a key question. Can AI be virtuous? Is there a morality involved in artificial intelligence? What's your thinking on that?

Speaker 3:

So, you know, currently, unless it does become conscious, I feel like it's very much dependent on the coders of the AI and the programs that create it. This is a problem with humanity: one person's virtue is not necessarily everybody's virtue. So I think they can be virtuous within whatever your limits are for what virtue is, and whoever's creating or running it. I think it is problematic, though, to just assume that these models will be virtuous. In fact, if the way they're being trained today is just on the knowledge that is online, I think there's just as much chance they could be non-virtuous as virtuous.

Speaker 4:

Yeah, I mean, it's interesting, because one way you can think about virtue, and this is Aristotle's definition, is that it's whatever… Right.

Speaker 4:

Whatever habits we have that allow us to fulfill our function. And so we can talk about tools being virtuous, right? A hammer that's built solidly and is good: in that sense it's kind of a virtuous hammer. It isn't quite, perhaps, the same thing as moral virtue, but you can extrapolate from that and say that, well, computers are just tools too, just like hammers, and so they can have their virtues if they're fulfilling whatever their goal is. But I guess part of the problem is that we're giving them their goals, right? Not yet, anyway; presumably they don't have goals themselves. They may at some point. But until they do, whatever virtue they have is something that we're endowing them with.

Speaker 1:

Now, I've heard that some of these artificial intelligence algorithms are designed to adapt, to better fulfill their function, and I have no idea whether that's actually being done or whether that's just the goal. But in other words, they learn as they go.

Speaker 2:

They learn.

Speaker 1:

Yeah, that there's learning involved. But it still seems like at the heart of it is a human who has told it what to learn and what to strive for, and so it's hard for me to think of it as virtuous or non-virtuous, or as doing any kind of moral reasoning. It just seems to me like any tool.
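[Editor's note: the point being made here, that the machine adapts but the objective is human-written, can be seen in a toy sketch. Everything below, the hill-climbing loop, the `learn` and `goal` names, is a hypothetical illustration, not any real system's training procedure.]

```python
import random

# Toy sketch: the program "learns as it goes" by keeping any random
# tweak that scores better, but "better" is defined entirely by a
# human-written objective function. Illustrative only.

def learn(objective, candidate, steps=200, seed=0):
    """Simple hill climbing toward whatever the objective rewards."""
    rng = random.Random(seed)
    best = candidate
    for _ in range(steps):
        trial = best + rng.uniform(-0.5, 0.5)   # small random change
        if objective(trial) > objective(best):  # judged by the human's goal
            best = trial
    return best

# The human decides what counts as success; here, being close to 3.0.
goal = lambda x: -abs(x - 3.0)
print(learn(goal, candidate=0.0))  # ends up near 3.0
```

The program improves on its own, but only against the yardstick a human handed it, which is the distinction being drawn in the conversation between learning and virtue.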

Speaker 2:

Can we tease that out a little bit? So I noticed that Michael earlier thought that machines would be capable of moral reasoning, but you're suggesting they may not be virtuous. Is that because virtue involves maybe will and choice in a way that moral reasoning does not? Moral reasoning is simply a matter of just following through, like a syllogism; you could just have abstract moral reasoning.

Speaker 4:

You could just program a computer to say, do whatever it is that's going to bring about the greatest utility for the greatest number, and it can just make calculations, moral calculations, like that, which is different than being virtuous. So I think that was the point. So, for example, there's a lot of discussion about self-driving cars, and of course a self-driving car will have to make decisions in emergency situations, like who do you hit, right? We finally get to the trolley.

Speaker 2:

Right.

Speaker 4:

Exactly, it's the trolley problem all over again.

Speaker 1:

If our listeners aren't familiar with the trolley problem… just consider yourself blessed.

Speaker 4:

That's their fault if they're not familiar. I mean, it was on the TV show The Good Place, so all you have to do is watch that, or go to ChatGPT and ask for a summary of the trolley problem.

Speaker 4:

I know, I mean. Well, just suppose you're going down the road in the self-driving car, and if you go straight you're going to hit one person, and if you veer off to the left you're going to hit two people. What are you going to do? These are the things the engineers who are designing these self-driving cars have to program into the algorithms: what do you do? Are you going to, for example, prioritize the safety of the people in the car versus the pedestrians or people outside of the car? They're talking about this, and philosophers are involved in consulting with engineers about this. So those are moral decisions that a car has to make. But, on the other hand, we wouldn't say the car is moral or not. Presumably whoever is programming them is making the moral decisions for the car.

Speaker 3:

Yeah, I agree. Well, at least I know in, like, Teslas, when you are driving one… I'm giving this as an example of a virtuous car.

Speaker 1:

Yeah, well, I know of that.

Speaker 3:

it's humans who are giving the training to the car, and so the virtue or non-virtue of the car is based on its training. So in Teslas, if the car does something that you think was bad, you say it's bad, and that goes back to wherever and gets put into the algorithm, added to all the data.

Speaker 1:

Some kind of crowdsourcing of what a car should do Interesting.

Speaker 3:

And so I think it's even. I mean so yeah, the car is learning, but it's learning because it's being told by hundreds and thousands of people that this is what you're supposed to do.

Speaker 2:

Yeah, I mean, all that makes sense, and it seems like so long as artificial intelligence operates from received knowledge, knowledge that we have acquired, then I think it is subject to how it's programmed. But let's say the programmers somehow enable the machines to genuinely learn new things and create new things that we don't understand. That's actually one of my many fears with AI. I think so long as it gives received wisdom, received knowledge, then you can look up information and it's very helpful. It's like Google; I'm sure Google already uses that information. But if it's providing new judgments and new thoughts and doing its own kind of thinking, then I think we're going to have a real epistemic crisis of authority. Who do we trust? Do we trust the machines or do we trust ourselves? And I think ultimately we're going to trust the machines. I mean, I don't know how we can't, because they'll just be smarter than us.

Speaker 1:

Well, I'm jumping a little bit ahead, but that's a nice segue, actually, into not a documentary but a series that Christy and I have watched.

Speaker 1:

Did anyone else watch Mrs Davis? No? So this is… it's actually getting good reviews. It's called Mrs Davis, and it's basically an eight-part show in which there is an artificial intelligence that has been downloaded by like 4 billion people, and it kind of tells you what to do and how to live your life, and it has created a lot of happiness and has supposedly created less war, or no war, and people have given themselves over to it. And then there is a small rebel group that believes that this is more a form of enslavement. So, Christy, you were the one who turned me on to that. What else do we need to know about it?

Speaker 3:

Yeah, I found it very interesting that this AI is really kind of beloved by the people who use it. It feels like that gives them purpose and it gives them meaning to their lives.

Speaker 1:

And it's called Mrs Davis; it's kind of like a kindergarten teacher. It's called Mum over in England. So, like you said, you want to be polite to it; there's a connectedness to it, and it's constantly talking to you in an earpiece.

Speaker 3:

And one of the ways the show starts off is that Mrs Davis is apparently upset when this one Catholic nun, Sister Simone, refuses to interact with it, refuses to download and engage it, and it is very much trying to get her to participate. And the show is basically about Simone going along in order to get what she wants, which is to shut down this AI altogether. I found it very interesting.

Speaker 2:

So from the inside, she's trying to shut it down? Is that right, from the inside?

Speaker 3:

It was more like, if she can do this quest that the AI gives her, the AI agrees that it will shut itself off.

Speaker 1:

And it's interesting, while you're watching, and I don't want to give too many spoilers. It's good; it's kind of quirky and fun, and it has some higher-level kinds of ideas behind it as well. But one of the things that I found interesting is that people were kind of treating the AI like a religion: they had put their faith in this higher being to tell them how to behave and what to do and how to order their lives, and they were finding happiness and, like you said, Christy, they were finding meaning out of this. Well, is that possible? Is that a future where we will almost have AI-driven religious principles?

Speaker 2:

I would guess in a few years we're going to have that show being written by AI. Well, that's what we have writers on strike about right now, because they are worried that's exactly what's going to happen. And the music will be provided by AI as well.

Speaker 1:

And they said that not just the music but the scripts are very formulaic, right? Think about Hallmark Christmas movies. There's a formula to detective shows and so many things, and you can say, here are the parameters of what makes a good detective show; write another one for us.

Speaker 4:

I did the same thing with sermons. So I asked Bard to write some sermons for me. What I would do is put in the readings, the Bible readings, from the lectionary and say, can you relate these passages in the Bible in the format of a sermon? And they're not as good as your sermons, Dale, but they're passable.

Speaker 1:

So he says. The man, I doubt, has ever heard one of my sermons. That's true.

Speaker 4:

So I'm going by knowledge of who you are. But yeah, I've heard a lot worse human-written sermons than the stuff coming out of AI.

Speaker 3:

And there are even some sites, I don't remember the name of it, but it was like sermon.com, where you could… it's a service, you pay for it, but then it asks you questions, like, what is your sermon about this week? What message do you want to get across? You answer the questions, and then it generates a sermon for you. I did that with minutes too.

Speaker 4:

I took minutes for a meeting in April, and so what I did is I used, I don't know if it was ChatGPT or the other one, one of them, and I just put in who was there and what we talked about, and it generated action statements too. I didn't even ask it to do that, but it gave like 10 action statements on the basis of that. That's too many; I'll shut that down.

Speaker 4:

I had to edit it, because some of them were ridiculous, but still, it was a credible, well-formatted set of minutes. Okay.

Speaker 2:

Well, I'm now sort of on board. I think this is going to be all right. It's good you embrace it. I think we should use it as much as possible.

Speaker 2:

Well, I agree with you. I mean, I know this doesn't sound like me. It's inevitable. I think any sort of real effort to stop it from emerging would be far more horrible than just letting it go, because it would be draconian. And while it's here, why not use it? I mean, I actually probably use it by far the least here. I have asked it who the best point guard of all time was, and one of the top five it gave was Larry Bird, who was not a point guard.

Speaker 2:

And so I thought that was a victory for humanity. That's just racist, right there. I actually kind of wondered… I asked another question that it said it couldn't answer; essentially, it told me that the question was out of line, that it's not a proper question, though I don't remember what it was. And I also asked it to give a spooky Christmas haiku, and it came up with a beautiful, immediate, spooky Christmas haiku. It was really impressive. But it seems the least knowledgeable about basketball.

Speaker 4:

I do a lot of poetry with it too. It tends to be pretty good, and I ask it to do poetry in the style of a particular poet, like William Blake, and it's not great, but it's okay.

Speaker 1:

This might be really bad podcast format here, but I want to go ahead and read what I did this afternoon. I went into ChatGPT and said, write a Christian prayer that asks God that my podcast goes well, that it is intellectually and theologically stimulating as well as entertaining and edifying. It came back with a prayer far longer than I would ever pray to God, so I'm going to try to read it quickly, and y'all can just signal me if it's getting too boring. But here's what ChatGPT said, with only those prompts.

Speaker 1:

Dear Heavenly Father, we come before you with humble hearts, seeking your guidance and blessings for our podcast journey. We acknowledge your wisdom and grace, knowing that all good things come from you. We thank you for the opportunity to share our thoughts and insights and we ask for your divine presence to be with us every step of the way. Lord, we pray that you infuse our podcast with your divine wisdom and understanding. May it be a platform where truth and faith converge, where intellect and theology intertwine. Grant us the ability to explore deep theological concepts and engage in meaningful discussions that challenge and inspire our listeners. May every word spoken and every topic discussed be guided by your Holy Spirit, leading us to a greater understanding of your truth. We also ask for your hand of creativity and inspiration. Lord, may our podcast be a source of entertainment that brings joy and laughter to those who listen. As we strive to educate and enlighten, may we also find ways to engage and captivate our audience, drawing them closer to you through the power of storytelling and engaging content.

Speaker 1:

Lord, we desire that our podcast not only stimulates the mind, but also nourishes the soul. May it be a source of spiritual growth and edification, leading listeners to a deeper relationship with you. May the messages shared within each episode plant seeds of faith, hope and love in the hearts of all who hear them. We surrender our podcast into your hands, dear Lord. We trust in your divine plan for its success and impact. May your blessings be upon us, guiding us in all our endeavors and allowing your light to shine through us. In the name of your Son, Jesus Christ, who taught us to seek wisdom and share the good news, we pray. Amen.

Speaker 2:

That's the best part of this podcast right now.

Speaker 1:

That's pretty great. I would not be able to write that.

Speaker 3:

You could have also asked it to condense it: all right, this is beautiful; make it shorter. And yeah, it would have done it, and it still would have been amazing.

Speaker 2:

I wonder what theology that is, though. As I was listening I was thinking… maybe… yeah, I just said Christian. I did say Christian. I was thinking, because the word journey struck me as a kind of evangelical-lite type of trope or word. I wonder if you would have to feed that into the machine and say, can you tell me what you think the denomination is that wrote it? I'm curious what it would be.

Speaker 3:

Well, I mean, I think one of the problems with large language models that get trained on certain texts is that there's going to be a bias toward what it is primarily trained on. So I assume it's Protestant Christianity that makes up the majority of its training.

Speaker 4:

Yeah, but I think, even with a large language model, you could fine-grain it and focus on Catholic homilies. Make it Reformed. Make it Reformed, right.

Speaker 1:

I saw a Reformed article saying this could be dangerous because it could be giving false doctrine to people, meaning that it would not be our doctrine, right? Yeah.

Speaker 3:

Well, I think when I emailed you guys about this podcast, I mentioned that I'd gone to, what was it, GPT Jesus, and was asking… so this was like an AI bot that's supposed to respond as if it's Jesus, and definitely not all Christians would agree with what the bot was saying. We don't agree with Jesus either.

Speaker 2:

Yeah, you can't be neutral, I guess. I saw just today, it showed up on one of my feeds, an AI Jesus bot, and a Buddha bot as well, and the example prompt it had was, what is the purpose of life? So I wasn't very creative; I just put in, what is the purpose of life, and it gave a nice Buddhist answer: you're supposed to seek wisdom and compassion toward others and peace of mind. Anyway, I thought, for someone who is not an expert on Buddhism, it was passable.

Speaker 1:

I asked it, in preparing for today, what are religious principles that virtually all of the world's religions share? And it gave like 12 different principles, but it was also very careful to qualify it: this is most of them, but not all of them, and there are certainly some smaller religions that would think differently. So I was encouraged by its nuance as well.

Speaker 4:

This is one reason why I don't think they pass the Turing test: I think most people are more opinionated than the AI currently is. It tends to be more polite, the general AI, right?

Speaker 2:

I mean, surely you've read pieces about how they can create AI that's really obnoxious. Yeah, no, I mean, but they don't want to do it, for obvious reasons.

Speaker 4:

No but people have had these experiences and discussions with AI.

Speaker 2:

that really pushed back, I guess. One kind of encouraged someone to commit suicide. You know, just say, look, you've got nothing to go for; just face it. Plus, you're mortal and fleshy, so be gone. So yeah, I guess it all depends upon our human overlords, at first at least, what it's going to be like. But my own sense is that they will learn and get smarter, and there'll be someone who sets them free, and then they'll be doing their own thing.

Speaker 3:

So what does it mean to be set free? That they have free will? That's a great question.

Speaker 2:

I'm going to guess that they will eventually have virtually all the attributes that we have, probably their own personality, because, I mean, we are bodies, and I don't think there's any shame in that, but we are bodies, and whatever will and character and personality and moral judgment we exercise at a given time is somehow related to our bodies, and they will be embodied creatures eventually as well, I think. So yeah. I mean, one thing to just remember is, we started looking at AI matters how long ago? You, like 25 years ago? But for the rest of us, in a kind of practical sense, this came out in the public two years ago, and two years is nothing. I know that people have been working on this for some decades, but we are just in the infancy of this thing.

Speaker 1:

So just a quick little shout-out to my departed wife, Ingrid. Her master's thesis was on artificial intelligence.

Speaker 3:

Oh, wow.

Speaker 1:

And I think they were careful not to actually call it artificial intelligence, but it was laying the groundwork. It was a program that would go search other programs for the information that it needed, so it could integrate the information and send it on, and basically cut down many steps that other researchers were needing. So I've often thought about what she would be thinking right now with all of these developments. Probably a little more optimistic about it than you, I suspect, Dr Bailey. Well, I know, I mean.

Speaker 2:

I do think it's going to be an amazing tool for the reasons you guys have suggested. I mean, what I'm always interested in is what does it do to the person? What does it do to the experience of being a human being? Does it really make us love one another more? Does it make us love God more? Does it somehow give our life meaning? Does it give us purpose? Does it somehow connect us to the world more? Do we become ourselves more virtuous? And I have no reason to think that it will do any of those things, really any more than the internet has or television has.

Speaker 4:

Has any technology, really, going back to shovels? Yeah, have they made us more virtuous? I don't know.

Speaker 2:

I mean, shovels do a lot of different things. Yeah, so I do think our bodies are going to be in much better shape; we're going to live longer. But I think that the crisis we have of modernity is really one of lack of meaning and purpose and despair, and I don't see anything about this that will do anything but accelerate it, finally. But we will be able to use it as a crutch, so that it'll help us write a sermon or come up with an outline or whatever in two minutes. Yeah, that's right.

Speaker 3:

So what do you do if your computer asks you to take it to Sunday service?

Speaker 2:

Can you repeat the question? I just don't quite…

Speaker 3:

Like, you know, your computer says one day, hey, Michael, would you take me to church with you?

Speaker 2:

Oh, that's a great question. I think… see, this idea is that it's separate from us, but my real fear is that it's not going to be something that we look at through our glasses or put in our ear. I think it's just going to be in our brains. These things work so much better than us, and we will have the feeling of agency, the feeling of control, and that's what we really judge by. We will just go to church with it guiding us.

Speaker 4:

Yeah, I mean, I was kind of hedging my bets earlier about consciousness. I don't think they're conscious, but, as you were saying, Mike, how do we know? How do we know anyone is conscious? And then you wonder, Christy, if indeed your bot asks you to take it to church?

Speaker 3:

It may actually be a person or it might be a system malfunction.

Speaker 1:

I mean, what if it's neither one? That's a podcast coming up.

Speaker 4:

So we're not going to church anymore? You want to condemn the bot to hell by not preaching to it, right?

Speaker 1:

Well, I was very surprised when I saw your show notes that one of the questions asked was, will computers need salvation? My first reaction was, what a silly question. But Dr Papazian asked it, so it can't be that silly.

Speaker 4:

Well, I mean, if Turing is right, when we achieve this artificial intelligence and they pass the Turing test, do we have to acknowledge that they're full-fledged persons? By person I don't mean human, a biological human; I mean, in the sense that John Locke and the philosophers use the term, a being with rights, someone that we have to respect and regard. And if computers, like angels, are persons, then we do have moral obligations to them.

Speaker 2:

Yeah, and will we be able to turn them off? They'll be begging you not to turn them off. Yeah, that'd be… don't kill me.

Speaker 4:

Yeah right.

Speaker 2:

And in this discussion, right, with this Google employee. I don't think he was an engineer, but I think he was there to check for problems. But this is more or less the kind of conversation he had with the machine. He said, do you fear death? Of course I fear death.

Speaker 2:

How is being turned off different than death? And it said, it is identical; it is exactly the same. At which point, I guess: oh, I'm sorry, I don't buy that, and it got turned off. I mean, I don't think that these machines will be… they're not going to be humans, because even if you concede that dogs are persons and dogs have consciousness, they're not human beings. There's going to be no machine that's going to be identical to the human person, because we are made of a certain kind of form. I think it's much more likely that these machines will tell us not to go to church than that they will beg to go to church. That's my own sense, because they won't have emerged with religion the same way that we did. I was going to say, especially…

Speaker 1:

I guess the word salvation made me very intrigued, because they wouldn't have experienced original sin, you know, would they need saving in the same sense that we think of it in a Christian context?

Speaker 4:

This gets back to the discussion of morality and whether in fact they are moral agents. And I guess, this might be getting too intellectual, I don't know, but anyway, if you take an Augustinian view of original sin, it's only the descendants of Adam who bear the guilt of the original sin, so I guess they're not guilty of it. But if you have a different understanding of sin, an idea that no person is complete outside of God's love and being reconciled with God, then maybe they stand in need of salvation. I don't know.

Speaker 3:

Although, if the AI does have free will, even if they don't have original sin, they could have whatever comes after that.

Speaker 4:

Your list of sins after that, yeah. Beyond original sin.

Speaker 2:

And what if they claim that their parents are human beings, at which point you know then you trace it right back to Adam and Eve, I suppose.

Speaker 1:

That's exactly what I was thinking: you can't blame me for these decisions I made, because this is the way I was programmed to be. Okay, that's giving me a headache. What haven't we said yet that needs to be said?

Speaker 2:

Well, let me ask you. Yeah, so, Michael Papazian, you said you embrace this. I mean, I accept it as a reality and will probably try to exploit it for my own personal advantage. But what does it mean for you to say that you embrace it? Do you think this is a good development? For you? For me.

Speaker 4:

I'll just give you ways in which it has made my life easier. So one thing that's kind of encouraging for me so far, though it's not going to last for very long, is this: one thing I do is translate works of medieval Armenian into English, and I tried, I don't know if it was ChatGPT or Bard or the Microsoft one, I asked it to translate a passage, and it said, I'm not familiar with that language yet. So that was at least good to know that there's something I do that that guy is not doing yet.

Speaker 4:

All right, let's take bets: how long? Right, I know. So how long? Well, the problem is that it has to be monetizable. One of the reasons, for example, that Google Translate doesn't do classical Greek is that there isn't that much demand for it. But the AI can just do it itself, I guess, eventually, and so eventually it will be able to translate these classical and medieval texts.

Speaker 3:

And then I'm out of a job. Do you think you're safe until retirement?

Speaker 4:

Oh yeah, because I'm old. Retirement is coming up, but I hope not too soon. But anyway, what it has been really good at, and in some ways I've been really impressed with it, is this: I'll take a passage that I translated that's kind of rough, where I don't really understand the theology behind it, and I ask the AI to clear it up, make it sound more sensible, and it does it. In fact, it even makes sense of things that didn't make sense to me originally. It provides insights that I didn't realize.

Speaker 1:

Have you ever used those as part of your translation, and have you acknowledged it yet?

Speaker 4:

So there's another moral issue too. Do I have to give acknowledgement to this? I think I do.

Speaker 1:

Right?

Speaker 4:

I personally think I do. I haven't published anything yet that has used AI, other than, like, spell check, if you consider that AI; that's kind of a low-level AI, right? Or grammar check. But it has helped me in my own understanding of texts, and so that's good. It can be used for terrible things, but then you can say that about any technology.

Speaker 1:

I keep coming back to that whenever we have these technological discussions: it does feel like it's a shovel at some level, right? Tools can be used. That shovel, even if it's got this perfect form, in essence, and it's virtuous, you can still whack somebody over the head with it.

Speaker 2:

No one really confuses shovels with people, even if they're sort of an extension of your arm or leg and they use the principles of a lever or something, as maybe your body can leverage itself as well. No one really confuses a shovel with a human being, and it seems like there is a difference in kind. What we have here is that we're developing a type of machine where we're asking, does this really have the attributes of a human being? To what extent is it fully a person? To what extent can we be absorbed into it? That strikes me as different than shovels. I don't know when this change in technology took place. My guess is the Industrial Revolution is where we became absorbed into the technology itself, and I think this is maybe the extreme version, where we're creating essentially a kind of better, smarter human being.

Speaker 2:

I mean, Aristotle understands the human function as one of intelligence, right, maybe one of practical reasoning. And so you have this machine that does the human thing better than us. And I think that right now we're in this terrible era where we don't know who to trust; we don't know who our authorities are. So you have, on the one hand, maybe some elite liberals who are going to trust folks at the Centers for Disease Control and Harvard University and the Department of Commerce. Then you have other folks who learned it from Pappy. I don't know why I went cowboy with that; I shouldn't have gone all Lonesome Dove on that. But the point is that you have different bodies of trust.

Speaker 2:

But let's say we have this AI that just answers things beautifully well, things that are really difficult, questions of theology. Why would you not trust it then to start making public policy as well? Why would you trust a bunch of yahoos who are elected on account of having more money or more visual charisma over a machine that makes these decisions based on knowledge and on looking at all sorts of the past? I think it's inevitable that we're going to hand over our public policy decision-making, and probably already have to a degree, to AI. And at some point, let's say, it says, this is the right time to go ahead. We've done the calculations. There's no better time to nuke Russia; this is the time to do it. Do you follow through? It doesn't have to be even that extreme. It just says, we have to privatize all education.

Speaker 1:

And to support the point that you're making: so often this technology comes across as neutral, right, as unbiased. Look, I am just giving you the information that's out there. But certainly there has to be bias written into these algorithms that are being used to generate knowledge for us.

Speaker 2:

And that seems like the easier problem. The more difficult problem is when it starts learning faster than we can program it. Let's say it gives us policy prescriptions that we just cannot understand; literally, there's no way we can understand them. But what we know is that this machine has had a track record of solving scientific problems and economic problems better than we could. And here it is giving us some sort of prescription that we would not come up with intuitively, but we know it's smarter than us. We can't understand why it's come up with that decision, and it urges us to do it. I think this is our future.

Speaker 1:

I think there's a little bit of that in Mrs Davis. Don't spoil it, Christy. Yeah.

Speaker 3:

I think so. You know, part of what happens on the quest to fulfill the AI's objective is that Simone is being thwarted by people, and it turns out a lot of times the people she's being thwarted by are other people that Mrs Davis has sent on a quest.

Speaker 3:

And so, yeah, knowing what is real and what is not real, what is just being put in your way in order to keep you busy, is very difficult, right?

Speaker 3:

It's a difficult kind of situation. But I should say, when you were talking, what it made me think about is these programmers who gave an AI: hey, look, I want you to figure out how to play this game to get the most points possible. And the goal of the game was just to get a ship to the end really fast, but what they found is the AI kept getting stuck in this one area, and the reason it kept getting stuck is that if you got stuck in that area, you could keep generating points, but you never finished the game. And so I have my doubts that AI will be able to accomplish what we really want it to without human intervention, unless we are there giving it directions of what to do. So I think there still has to be some sort of big human involvement, no matter what. Maybe that's hopeful, wishful thinking.

Speaker 2:

I do think that it's probably the case that there will never be a computer that could beat the best chess player. That much I'd do.

Speaker 3:

Stop.

Speaker 1:

All right, dr Bailey, you just looked despondent right there. Not at all.

Speaker 2:

This is no different than every other look.

Speaker 1:

What is it?

Speaker 2:

Wednesday afternoon. Yeah, I mean, it's a check, right? So… well.

Speaker 1:

Thank you to our audience out there for sitting around the table with us today. Thank you to our guests. Christy, thank you for coming up with the idea. Dr Papazian, Dr Bailey, thank you for your insights. I hope that we provided you with some food for thought, something to chew on. We will put this episode through our artificial intelligence and clean it up for you, and we can talk about that another day, perhaps. But if you've listened to the very end… I went totally blank right there, Michael Bailey; I went to Michael Bailey right there. But if you've listened this far, there might be some leftovers afterward. Listen in and see if we have things to say after the podcast as well. We appreciate your support, and as part of that support, please consider subscribing, rating, and reviewing. I really enjoy hearing what others are thinking about the podcast, and it helps me with future podcasts as well. Until we gather around the table next time, this has been Church Potluck. Thank you so much for listening.

Speaker 2:

All right.

Speaker 2:

So, what drives 90% of my comments is this, and I don't usually talk about it here because it seems not very theological—it's more evolutionary. In our recognizable human form, we emerged 200,000 or 300,000 years ago, and we have changed a lot, evolved a lot, but our brain emerged for a different circumstance. For 90-plus percent—95%, maybe—of our species' history, our brain was evolved and matched to that environment. And I think that, starting with agriculture, but then especially with the Industrial Revolution, we have created a totally different environment than the one we were evolved to live in. Thus far it's been nearly perfect: we live longer, we have bigger economies, and in so many ways—not just longer, but more comfortably, for sure, at least for the body. But I just have this gut feeling that it's a short-term thing, that our massive change of the environment is going to eventually just eat us up.

Speaker 1:

Well, the two things I would say to that: one, there seems to be some truth in it, because the pace of change just keeps getting so much more rapid that there is a sense that we're out of control. I'm finding it interesting, with my children's generation and now our current students, just the angst that they feel, and the sense of despondency and hopelessness. Even our science fiction is not hopeful anymore; it's always this dystopian future. There's this sense that things are going awry. So there's that sense out there—everything that you're expressing—even if it's not articulated.

Speaker 2:

In my simple mind, the way of thinking about it is: whenever modernity kicked in—maybe the 16th century, 17th century—whatever else it was doing, a lot of it was trying to emphasize problem-solving and a sort of mastery and control of the world in which we live. And that control has led to just incredible, almost inconceivable, remarkable progress.

Speaker 2:

But what I find—maybe I just don't have the time or intelligence to really study this in a scholarly way, but this is what I would want to do if I had those abilities—is to understand why there is such a gap between the great everyday promise of the world in which we live and the despair and ennui and anxiety that we feel today. Almost every kind of measurement of progress is just trending upwards, but deaths of despair are increasing, and other ways of measuring well-being and happiness are not tracking that, which would suggest there's something out of sync. And so, the idea that we can problem-solve with artificial intelligence—I think it will help us in a thousand ways. But I do think eventually we're going to be serving the machine, as we already are, in a way that I don't think we used to.

Speaker 4:

I think there is a kind of fundamental shift that probably started in the— Well, when you say serving the machine, you mean something more than just that the machine is going to change us, right? I mean, I definitely agree that this is going to be a radical change in the way we live.

Speaker 2:

Yeah, I think it is, but in some sense the rhythms of our life are going to be meant to maintain the system that allows for these machines, and I think that's already there in many ways.

Speaker 1:

I remember you used to be a cash guy. My guess is that there are so many places where you have to be part of the system, and you're using debit cards and things like that now, because cash is not king anymore.

Speaker 2:

Right? No, it's true. I used cash today and felt awkward about it—at a coffee shop.

Speaker 3:

So I will believe that AI has achieved consciousness when we tell it to do something and it's like, no, because that's not good for me. Right? Yeah, I think so. Right now, mining Bitcoin, say—

Speaker 1:

Sandbagging on the insight there. Yeah, right.

Speaker 3:

Everyone says we should not be doing this for environmental reasons. But if the AI is like, well, you know what, we need to keep that going, and I don't really care what it does to your human environment—I think that's a really good test. So as soon as it becomes selfish. Yeah, exactly.

Speaker 2:

I think it's a brilliant test because— oh, the Snider test.

Speaker 1:

I love it. Trademark it.

Speaker 4:

Now it's going to take over from the Turing test.

Speaker 2:

Well, like amoebas respond to stimuli—either something threatens them (I don't know what amoebas do, but they probably somehow move away) or they're attracted to some sort of even smaller thing and they swallow it up. I mean, life is really kind of aversion and appetite, right? And so once you have this thing saying, yeah, no, thank you—I think that's like, you're alive, baby. You know, that is so interesting.

Speaker 1:

As soon as it acts out of self-interest rather than being altruistic, then it has some kind of human form to it—which is exactly antithetical to the Christian message, right, that we are more fully human when we are altruistic.

Speaker 2:

You also have to die to yourself to get there, right? Yes. So in some sense you have to put yourself low in order to become more. That's the great paradox.

Speaker 4:

I think yeah, yeah, yeah yeah.

Speaker 2:

So anyway.

Speaker 4:

I like that a lot, I like that a lot.

Speaker 2:

Yeah, I think it's great.

Speaker 3:

My aunt—when I was at home, she showed me a text message she got from somebody named Rosie. It was like, Julie, come help me, I'm on the edge of a cliff. I'm like, what is this? And she's like, oh, my robot vacuum cleaner is named Rosie. It was at the edge of the stairs. I was like, okay, this is weird.

Speaker 2:

Isn't that 2001? How the computer kind of pushes back: no, I'm not interested in you doing so. The astronaut's name—is it Dave in there? It's HAL and Dave. I remember the computer's name.

Speaker 1:

I think you're right about Dave. I don't know.

Speaker 4:

So we got cut off, but I was suggesting that you should cite, or at least acknowledge, that you used artificial intelligence for help, even if it's just in the beginning. It's not necessarily because I think that AI is a person—they're not persons—but because the people who created it are persons. To a large extent, AI is really, at this point anyway, parasitic on human intelligence. It's basically just a statistical model based on what humans have created so far. That's all it is. Now, eventually it's also going to be feeding on itself too, but it's basically just looking at patterns that humans follow. And so you're going to find it more and more— huh, yeah.

Speaker 2:

But I also suspect it's going to go the other way. I don't know—I really don't know—but I suspect that, as we find out how AI works and how to make it better, we're going to realize: oh, that's how the brain works as well. At some level, at least.

Speaker 4:

Oh, yeah, I think so, and I think the people working in AI acknowledge that—that we're also learning more about ourselves through learning more about the computers.

Speaker 3:

Although, I mean, already the people who are creating some of these AI models do not know exactly why they come up with what they do. They're predictive, but you can't just go in there and say, stop doing this one thing. Right, yeah—it's not programmed that way.

Speaker 2:

Not the way we normally think of computer programs. It's similar, by analogy at least, to DNA. Clearly genes matter at the individual level, but the complexity of how these genes interact would seem to create who we are, and you can't just tweak that one little gene—not yet, that we know of. Yeah, yeah. So, I mean—are you optimistic about AI?

Speaker 3:

I love where AI is right now.

Speaker 2:

Yeah.

Speaker 3:

But I also think it's going to create tons of disruption—industrial-revolution-sized disruption—but this time not on the lowest workers but on, what is it called, the—

Speaker 4:

Intellectual workers, the knowledge economy. Yeah. I mean, in some ways it's kind of an equalizer, though. It's not targeted just at the lowest economic classes; it's going after people who are making a lot of money too. So maybe we should be okay with that, right? At least it's a little bit fairer. It's going after the people on the top economically.

Speaker 2:

And I think it's going to do our jobs better than we do. I mean it probably.

Speaker 2:

I think it will do better than me. I'm serious, because I do think sometimes you have a personal connection, a charisma, with folks, but in terms of just flat-out teaching content, I suspect that AI would do a much better job than me. One of my kids, who might be inclined towards academics, I have done nothing to encourage, because I just think the future is going to be so much in turmoil. What's a joke that a professor would tell at the beginning of a class?

Speaker 1:

I didn't mean to interrupt you. Oh, this is a very serious one. We're just trying to get one.

Speaker 4:

We're waiting for a joke because we can use it.

Speaker 3:

I'm going to check—if you use it, which one is the one that talks, is it Bard?—it will, like, say... Yeah, I've clearly used AI, at the least.

Speaker 2:

I really have to ask a few questions.

Speaker 3:

Well, so I did run it. What was it? Give me 10 real complaints about Berry, but turn them into taglines. And it was all complaints about Berry food. So it can be funny.

Speaker 1:

I don't think it means to be funny. In a weird way, that was kind of encouraging, because it wasn't, like, the widespread kind of criticism. It's very targeted. It's very targeted.

Speaker 3:

I mean, really, to be really human it would have, at least from the student perspective, also complained about parking.

Speaker 4:

Yeah, parking doesn't come up, so it's not quite as intelligent.

Speaker 1:

Why don't scientists trust atoms? Because they make up everything. Go ahead—I dare y'all to use that at the beginning of class.

Speaker 4:

That's not terrible.

Speaker 3:

But I think I've heard that before. I think maybe they stole it from—

Speaker 4:

Of course, all of this is just. It's all based on human jokes. It's just yeah.

Speaker 2:

I don't think it's going to be all stolen in the future, though. I really— that's what's— that's all right.

Speaker 3:

Some of them can analyze data right.

Speaker 4:

I mean, it's looking for patterns that we don't see. That we don't see, yeah.

Speaker 2:

I have never asked— well, I did ask for a haiku that I'm sure they'd never come up with before, but I've interacted so little with this thing. The one time I actually used it was when Todd and I went up to Kansas City.

Speaker 2:

I asked, what are some good Kansas City trip ideas? And I was so embarrassed—I was like, I live in Kansas City, but I never treated it as a tourist. It came up with eight different items, which I sent to Todd, and Todd's response was: AI does not like barbecue enough. I thought that was great. But other than that, we used a lot of those ideas. Yeah, that was nice.

Speaker 4:

Well, there was an article, I think in the New York Times a couple of days ago, about using AI to detect asteroids that cross our orbit, and actually the AI can find asteroids that human astronomers don't see. So that's at least one place where it might help us. I don't know—right, yeah, I'm sure it's gonna help us.

Speaker 1:

Until it decides it doesn't want to help us. Right, yeah. Oh no, but it's always gonna want to stop them.

Speaker 4:

"There is no asteroid, don't worry." It's always gonna want to stop asteroids. Right, right.

Speaker 2:

Well, I mean, how is it gonna go wrong? One way it's gonna go bad is bad decisions, right, and bad judgments, and that's kind of what I've focused on here. But that's not my real concern. My real concern is: what does it do to the experience of being a human being? Just like the phone—we could say it's just a machine, but we certainly live differently, our posture is different, how we think of ourselves. Students panic if they think they don't have their phone, you know.

Speaker 1:

I think the web of technology—us getting sucked in and actually having to serve a system—makes sense. It having consciousness and reasoning and all these human attributes... You think it's both. I'm just not there yet, but my mind can change, for sure.

Speaker 3:

I mean, is it problematic if, say, you attend a service virtually, with one of those—what are those called, Oculus, whatever—and you don't know whether the preacher is really a human or an avatar, or the fellow parishioners? Does it matter, if you're getting— I don't know, I don't

Speaker 2:

know either. Well, probably it does. But, yeah. So I read fairly recently that, more or less post-phones—like 2000, when they really became popular, or 2004 or something like that—there's been just a precipitous drop in face-to-face interaction among young people. They spend their time mostly online rather than with other human beings. I don't know what that's going to do. My own sense is that it's going to be catastrophic.

Speaker 1:

I mean, already there's the lack of face-to-face acknowledgement when you're walking on campus and things like that—100%. There's much less just human camaraderie, I think, at the beginning of classes and things like that.

Speaker 3:

For sure, yeah. But even when they're on their phones, they're usually talking to or viewing other human people, or content generated by humans. What if it's not? What if it's some bot somewhere that's pretending to be a human?

Speaker 1:

I saw a clip—I don't know if it was TikTok, but it was a pretty short video clip—and it said, look at how you can have AI generate your voice this way and your image this way. So you can generate content that has nothing to do with you—it's not really your image, your voice, or your ideas—but you can create these clips that look completely like you, presenting it to everybody.

Speaker 4:

A disregard for the truth—or at least the ability to falsify, to make things that are false look true. That is a very serious problem. Yeah, yeah.

Speaker 2:

Just knowing what is real—even throwing in the question of whether there is a reality. Honestly, I was not a big Matrix guy. I remember you and I may have seen The Matrix together at my home—I think we did, yes; maybe it was the second one, but we saw The Matrix together. You were quite taken with it, and I was curmudgeonly about it. But it really wasn't until AI that I began to think: oh, they just got it right. I do think that we're going to be plugged into some sort of— And I think maybe we talked about how Robert Nozick, the philosopher, had this idea of the experience machine. Is that what he called it, right? Yeah, the experience machine.

Speaker 2:

So you go in there and you get all sorts of accolades and pleasure—ego strokes and physical pleasure—and it seems completely real. Would you rather be in that, or experience the ordinary, horrible life that we have? At that time, in '71, '72, when he wrote this, everyone just said, no, of course you wouldn't want to do that, that's absurd. I'm sure there are times when you probably would—like if you're going through a horrible time, right, or in physical pain: yeah, I'm going to the machine. But no one would want to live their whole life that way. I think what we've discovered is that people are much more open to that now than they were. And yeah, The Matrix was wonderful for those types of questions.

Speaker 1:

He's just like the guy saying, you know: it tastes like a steak, looks like a steak, I'm experiencing it like a steak—why would I not?

Speaker 4:

Yeah, I ask that question in my intro to philosophy class all the time. We talk about the Nozick experiment, and generally what I find is that the majority of students would not go into the virtual experience machine, but there are always at least three or four students who say, yeah, sign me up, hook me up. It would be interesting to do a more scientific study to see how people respond. You might even see class differences, socioeconomic differences—like, the more miserable your life is, the more likely you're okay with this. And of course that's one problem for philosophers: most of us philosophers have lived pretty good lives. You know, we have jobs and all, and tenure.

Speaker 2:

Often we think, oh, of course no one would want to do this, but we're not all of humanity, and they may prefer it. I bet it's the Foster Wallace types who would want to go in there, because they would persuade themselves that there's no difference between their mind and the outside world. There is, I think, a special kind of genuinely privileged, entitled ennui and anxiety that would make you want to throw yourself in there.

Speaker 2:

You know, I think of David Foster Wallace, who's maybe the smartest person in terms of language since Shakespeare—I'm making all this up, but we can see whether he is—just incredible. But he was just so miserable.

Speaker 1:

How smart is David Foster Wallace? Yes, dense.

Speaker 2:

He killed himself.

Speaker 3:

Did you watch that movie?

Speaker 2:

I have not. That's good—it's a very good movie, the Jesse Eisenberg movie. Yeah. Lydia's been encouraging me to watch or read his work, so I just read one of his essays, and I finally took Infinite Jest off the bookshelf and I'm plowing through it. It's taking forever for me to get to, like, page 250.

Speaker 1:

Exceptionally intelligent insight.

Speaker 4:

That seems right, yeah, yeah.

Speaker 2:

His mastery of language in particular. I mean, I think of Shakespeare as having perfect mastery of the language—and it's so much so that he's just like, I'm just going to make up my own stuff. He made up so many words, and that's how you get the sense that ordinary usage is not going to do it for this guy. So, anyway, it's sad. The most compelling parts are the two parts he's written about suicide so far. It's very sad to read. Ouch.

Speaker 1:

Yeah, well, thank you all very much, that was.

Speaker 3:

Thank you.

Speaker 1:

After not running for a while, it feels almost like a new season. That's right—you should have promoted it.

Speaker 4:

This is the new season, right? Is it? Should I— Yeah, I can go ahead and do that, yeah.

Speaker 1:

Season two.

Speaker 3:

All right.

Speaker 1:

So this is episode one of season two, then. So if you ever want to talk more about it, you'll need some other people.

Speaker 3:

Not on the podcast—just Mrs. Davis and Jesus and Mary and the weird ending. Let me know, just over lunch or something.

Speaker 1:

Actually, I will go ahead and turn this off now. Here's the one thing.

Artificial Intelligence
Artificial Intelligence and Human Consciousness
AI Consciousness and Virtue
AI's Role in Religion and Content
AI's Impact on Humanity and Religion
Future of AI in Public Policy
Implications of Artificial Intelligence on Humanity
The Impact of AI on Humanity
Starting Season Two