What would it take to convince a neuroscientist that AI is conscious?

Large language models like ChatGPT are designed to sound human. The more you engage with these AIs, the easier it is to convince yourself that they are conscious, just like you.

But are they really conscious? They certainly sound like they are, but how would we know? What does consciousness even mean? Neuroscientists have been working on these questions for decades, and they still haven't settled on a single, universally accepted definition, let alone an experimental test.

Still, as AI becomes more integrated into everyday life, one has to wonder: could these bots ever become conscious? And if so, how would we know?

For this Giz Asks, we asked neuroscientists what it would take to convince them that an AI is conscious. Each highlighted obstacles that currently stand in the way of answering that question, and emphasized how much research still needs to be done on the basis of consciousness itself.

Megan Peters

Associate professor of cognitive sciences and of logic and philosophy of science at the University of California, Irvine. Peters is also a fellow of the Canadian Institute for Advanced Research (CIFAR) program in Brain, Mind &amp; Consciousness, and serves as president and board chair at Neuromatch. Her research investigates how the brain represents and uses uncertainty.

The difficulty in convincing a neuroscientist (or anyone) that an AI is conscious is that there is no objective test for consciousness, and creating one may be fundamentally impossible.

That's because consciousness is inner, private, and subjective. No one can look inside your head to find out whether you're conscious! Instead, we rely on external clues, such as behavior or brain activity, to infer that other people are conscious, because they look and behave much like we do.

With AI, we can't make the same assumption. Instead, the best we can do is design "tests" that strengthen or weaken our belief that an AI is conscious. We can look for signatures in an AI's architecture, its internal activity patterns, or its behavior that lead us to believe "somebody is home," but remember, beliefs are not facts. Just because you believe the sun moves around the Earth doesn't make it true!

Suppose we do design tests that could strengthen our belief that an AI is conscious. We still need to make sure they test the right kind of thing.

We don't want to test for intelligence or human-like responses (as the Turing test does), or for whether the AI is a threat (a Skynet-style system might be even more alarming if it were an unfeeling automaton). And we can't just ask the AI whether it's conscious; how would we interpret its answer? Maybe it's lying, or maybe it's just parroting statistical patterns learned from human-written text.

Recently, I and others have proposed ways to start building these "belief tests" for AI consciousness. I'm also working on how to determine whether tests developed for humans can be applied to other systems, including AI.

For example, we can test for consciousness in hospital patients by measuring brain activity in response to commands such as "imagine playing tennis" or "imagine walking around your house." But that kind of test doesn't translate to AI, because AI doesn't have "brain activity patterns" in the same sense. (Unfortunately, the test also fails for octopuses or chickens, because they don't understand language and/or can't imagine things on command!)

Another "test" would ask whether an AI exhibits the kind of information integration that researchers think is most closely involved in generating consciousness. This approach may be better suited to AI, but we have yet to pin down which kinds of processing, which kinds of integration, are the critical ones.

Even with all this, we have to remember that what we really want to know is whether an AI is conscious, not just whether we believe it is. We may never arrive at that kind of certainty, and in the meantime we need to be careful not to confuse our beliefs about AI consciousness with statements of objective truth.

Anil Seth

Director of the Sussex Centre for Consciousness Science and professor of cognitive and computational neuroscience at the University of Sussex. Seth is also co-director of the Canadian Institute for Advanced Research (CIFAR) program in Brain, Mind &amp; Consciousness. His research seeks to understand the biological basis of consciousness.

A very difficult question. While we have learned a great deal about the neural basis of consciousness in recent decades, there is no consensus on the conditions that are necessary or sufficient for it. So anyone who tells you that conscious AI is possible (or close, or already here), or that it is definitely impossible, is claiming more than can actually be said.

This uncertainty is nothing unusual; uncertainty is inherent in science. But the level of uncertainty around this question is perhaps unusually high, given how widely opinions diverge about whether machine consciousness is actually likely.

So what would convince me that an AI is conscious? I'm certainly not convinced by the recent chatter about consciousness emerging in large language models. I think the tendency to project consciousness onto these models says more about our own psychological biases than it does about what is actually going on inside them.

For me, the key question is whether AI, as we currently build it, is even the right kind of thing to be conscious. Many researchers assume it's just a matter of getting the "computations" right. On this view, consciousness in real brains arises from the computations the brain carries out, so it could equally arise from the same, or sufficiently similar, computations implemented in silicon. (In philosophy, this position, usually taken for granted rather than argued for, is called "computational functionalism.")

I'm deeply suspicious of this idea. The more you look inside real brains, the less plausible it seems that computation is all that matters. My personal view is that deep biological properties, such as metabolism and autopoiesis, may be necessary (though not sufficient) for consciousness. If this is on the right track, conscious silicon-based AI is off the table, no matter how smart it gets.

For me to be fully convinced, I would need a clear picture of the conditions that are sufficient for consciousness, and clear evidence that an AI satisfied those conditions. Those conditions might turn out to be purely computational, but I think that's unlikely. Alternatively, they may turn out to involve specific biological properties, which I think is far more plausible. The key point is that merely simulating those properties on a conventional computer would not be enough.

But absolute certainty is a high bar. Simply put, my strategy goes like this: the more we understand about consciousness in the systems where we know it exists, the surer we can be about where else it might exist. And really, we shouldn't be trying to create conscious AI anyway.

Michael Graziano

Professor of psychology and neuroscience at the Princeton Neuroscience Institute, where he studies the brain basis of consciousness.

The question is a tricky one. If it means, "What would convince me that an AI has a magical essence of experience arising from its internal processes?" then nothing could convince me. There is no such thing. And people don't have it either.

Almost all work in the field of consciousness studies today is pseudoscience, built on the idea that we must discover how a magical core of experience arises in humans or other agents. It's a bit like asking, "Where is the big hole in the ground, out west, that the sun disappears into at night?" or, "Who drives the chariot of the sun as it passes through the sky?" The question itself is flawed.

The human brain builds a model of itself. That self-model tells us we have a magical essence of experience. Or rather, the self-model is a useful but schematic description of the self. Instead of representing 86 billion neurons and their interactions in realistic detail, the model depicts an abstract, magical core. Everything we think we know about ourselves, including whether we are conscious and how confident we are in that, depends on models (bundles of information stored in the brain).

Now, if the question is, "What would convince me that an AI has the same kind of self-model that humans have, and is therefore conscious in the same sense that we are?" then it's easy to answer, at least in principle.

We don't yet know the details of the human self-model, so the comparison is difficult right now. But AI has one big advantage: we can look inside the black box and see what representations, models, and bundles of information are encoded within its neural networks. So-called "mech interp," or mechanistic interpretability, the study of the activity patterns inside an AI, is still in its early days but is gaining momentum.
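To give a rough sense of what that kind of inspection involves at its most basic level, here is a minimal Python sketch (not from Graziano or the other researchers quoted here) that pulls out a language model's internal activations so they can be examined. The model name and prompt are placeholders, and real mechanistic-interpretability research goes far beyond simply reading out these tensors.

```python
# Minimal sketch: extract a language model's internal activations for inspection.
# Assumes the Hugging Face "transformers" library; "gpt2" is an illustrative choice.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # placeholder: any small open model works for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

inputs = tokenizer("Imagine playing tennis.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple: (embedding layer, block 1, ..., block N);
# each tensor has shape (batch, sequence_length, hidden_size).
for layer_idx, activations in enumerate(outputs.hidden_states):
    print(f"layer {layer_idx}: {tuple(activations.shape)}")
```

Interpretability researchers then try to work out what information those activations actually represent; the readout above is only the starting point.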

Show me that an AI constructs a self-model, that this self-model depicts the AI as conscious, and that the depiction has the same characteristics as the human self-model, and I will accept that you have an AI that believes it is conscious in the same way that we believe it of ourselves.

The self-model is everything. It shapes who we are: our sense of self, our social understanding, our cognition. You are your model. I think it's probably a good idea, and maybe even inevitable, that we will eventually give AI a rich self-model.
