Does conscious AI deserve rights?
Watch the newest video from Big Think: https://bigth.ink/NewVideo
Learn skills from the world’s top minds at Big Think Edge: https://bigth.ink/Edge
———————————————————————————
Does AI—and, more specifically, conscious AI—deserve moral rights? In this thought exploration, evolutionary biologist Richard Dawkins, ethics and tech professor Joanna Bryson, philosopher and cognitive scientist Susan Schneider, physicist Max Tegmark, philosopher Peter Singer, and bioethicist Glenn Cohen all weigh in on the question of AI rights.
Given the grave tragedy of slavery throughout human history, philosophers and technologists must answer this question ahead of the technology itself, lest humanity create a slave class of conscious beings.
One potential safeguard against that? Regulation. Once we define the contexts in which AI would require rights, the simplest solution may be not to build those systems at all.
———————————————————————————
TRANSCRIPT:
RICHARD DAWKINS: When we come to artificial intelligence and the possibility of its becoming conscious, we reach a profound philosophical difficulty. I am a philosophical naturalist; I’m committed to the view that there is nothing in our brains that violates the laws of physics, nothing that could not, in principle, be reproduced in technology. It hasn’t been done yet; we’re probably quite a long way away from it, but I see no reason why in the future we shouldn’t reach the point where a human-made robot is capable of consciousness and of feeling pain.
BABY X: Da. Da.
MARK SAGAR: Yes, that’s right. Very good.
BABY X: Da. Da.
MARK SAGAR: Yeah.
BABY X: Da. Da.
MARK SAGAR: That’s right.
JOANNA BRYSON: So, one of the things that we did last year, which was pretty cool and made headlines, was replicating some psychology findings about implicit bias. Actually, the best headline was something like ‘Scientists show that AI is sexist and racist and it’s our fault,’ which is pretty accurate, because it really is about picking things up from our society. Anyway, the point was: here is an AI system that is so humanlike that it’s picked up our prejudices and whatever, and it’s just vectors. It’s not an ape, it’s not going to take over the world, it’s not going to do anything; it’s just a representation, like a photograph. We can’t trust our intuitions about these things.
SUSAN SCHNEIDER: So why should we care about whether artificial intelligence is conscious? Well, given the rapid-fire developments in artificial intelligence, it wouldn’t be surprising if within the next 30 to 80 years we start developing very sophisticated general intelligences. They may not be precisely like humans, they may not be as smart as us, but they may be sentient beings. If they’re conscious beings, we need ways of determining whether that’s the case. It would be awful if, for example, we sent them to fight our wars, forced them to clean our houses, and made them essentially a slave class. We don’t want to make that mistake; we want to be sensitive to those issues, so we have to develop ways to determine whether artificial intelligence is conscious or not.
ALEX GARLAND: The Turing Test was a test proposed by Alan Turing, the father of modern computing. He understood that at some point the machines they were working on could become thinking machines, as opposed to just calculating machines, and he devised a very simple test.
DOMHNALL GLEESON (IN CHARACTER): It’s when a human interacts with a computer, and if the human doesn’t know they’re interacting with a computer, the test is passed.
DOMHNALL GLEESON: And this Turing Test is a real thing and it’s never, ever been passed.
ALEX GARLAND: What the film does is engage with the idea that it will, at some point, happen. The question is what that leads to.
MARK SAGAR: So, she can see me and hear me. Hey, sweetheart, smile at Dad. Now, she’s not copying my smile; she’s responding to my smile. We’ve got different sorts of neuromodulators, which you can see up here. So, for example, I’m going to abandon the baby; I’m just going to go away, and she’s going to start wondering where I’ve gone. And if you watch up where the mouse is, you should start seeing cortisol levels and other sorts of neuromodulators rising. She’s going to get increasingly—this is a mammalian maternal separation distress response. It’s okay, sweetheart. It’s okay. Aw. It’s okay. Hey. It’s okay.
RICHARD DAWKINS: This is profoundly disturbing because it goes against the grain to think that a machine made of metal and silicon chips could feel pain, but I don’t see why they would not. And so, this moral consideration of how to treat artificially…
Read the full transcript at https://bigthink.com/videos/does-conscious-ai-deserve-rights