Susan Schneider, philosopher and director of the Center for the Future of Mind, AI & Society, recently highlighted the risk of ethical confusion: prematurely assuming a chatbot is conscious could lead to all sorts of problems.
The problem is that chatbots are great mimics… and so they’re asserting consciousness and people believe them.
For instance, in situations in which we have to balance the moral value of an AI versus that of a human, we might in some cases balance them equally, for we have decided that they are both conscious. In other cases, we might even sacrifice a human to save two AIs.
[And] if we allow someone who built the AI to say that their product is conscious and it ends up harming someone, they could simply throw their hands up and exclaim: “It made up its own mind–I am not responsible.” Accepting claims of consciousness could shield individuals and companies from legal and/or ethical responsibility for the impact of the technologies they develop.
These issues will arise whether or not AI is or can be conscious.
I wonder how to weight ethical confusion as a risk? As I said yesterday, humans are pretty self-centred, and we’re not going to treat AIs, chickens or workers in sweatshops any better just because we are co-sentients.
Schneider highlighted another risk back in 2017 that on the face of it appears more far-fetched, but which I personally give more weight. What if silicon can never be conscious? If so, as we start using brain implants, at what point do humans stop being conscious?
machine consciousness could impact the viability of brain-implant technologies, like those to be developed by Elon Musk’s new company, Neuralink. If AI cannot be conscious, then the parts of the brain responsible for consciousness could not be replaced with chips without causing a loss of consciousness. And, in a similar vein, a person couldn’t upload their brain to a computer to avoid death because that upload wouldn’t be a conscious being.
(I highlighted the same quote when I talked about AI sentience and Susan Schneider in 2023).
It’s a slippery slope: let’s say you have a computer chip running a large language model, and some of it is offloaded to a clump of brain tissue. Is that conscious? Instinctively we’d say no. (Btw, hybrid computer chip/brain tissue devices were built back in 2023, and they can do the audio processing that underpins speech recognition.)
But on the other end of things, let’s say you have a human brain with the very smallest possible implant: if you buy extended cognition, you might call always-on AirPods a minimum viable brain prosthetic, especially if they can sense and respond to brainwaves. So is that “cognitive hybrid” conscious? Yes, we’d instinctively say, it’s just a person with AirPods.
I mean, forget AirPods, I’m 100% sure that even Noland Arbaugh moving a cursor with a brain-computer interface (NPR) is conscious.
How far can we go? A brain-computer “interface” is just an interface, like a mouse or multitouch, even though it’s inside the skull. Subjectively there’s no difference between raising my arm to catch a ball and “thinking” the cursor to the top of the screen, right? Or between “knowing” the date (by thinking) and “knowing” the time (by unthinkingly glancing at the status bar on my ever-present phone).
If these don’t delete consciousness, then maybe the conscious “self” is located elsewhere in the brain. Smaller and smaller…
But… there’s a threshold somewhere, we’ve just talked about both ends… so as we load an individual with brain implants to control computers… to speak… control a powered chair… augment memory… is there a line beyond which they are no longer conscious, and we’re granting personhood (ethically, legally) to someone/something that is no longer a person?
Do we declare some legal limit, an arbitrary Kármán line of being a p-zombie?
Or the other way round, a Kármán line over which a large language model is declared conscious?
(The Kármán line is the conventional and imaginary boundary of space, 100km/62 miles straight up.)
It’s a nonsense.
Yet we’ll need answers, for all those pragmatic questions above.
I suspect that we’ll end up with a pragmatic hodgepodge, hammered out one precedent-setting legal decision at a time, in the same way that we assign personhood to corporations because it’s convenient and kinda feels right (in folk understanding Amazon has about the same amount of personhood as an ant), and in the same way that copyright is kinda ownership and kinda about incentivising the development of ideas and kinda this fair use thing… it’s all a fudge.
But ideally it wouldn’t be a fudge (much).
What this really exposes for me is that we’re going to need a more sophisticated way to think about consciousness…
Back in 2022, OpenAI co-founder Ilya Sutskever tweeted that “it may be that today’s large neural networks are slightly conscious”.
That “slightly” is incredibly load-bearing. What on earth does it mean?