
What if AI becomes sentient?

Blake Lemoine, a senior software engineer in Google's Responsible AI organization, recently claimed that one of its products is a conscious, sentient being with a soul. Experts in the field haven't backed him up, and Google has placed him on paid leave. Lemoine's claim concerns an artificial intelligence (AI) chatbot called LaMDA, but I'm most interested in the general questions. How could we know whether an AI has some form of sentience? What criteria should we apply? It's easy to mock Lemoine, but will our own future guesses be much better?

The most popular standard is what is known as the "Turing test": if a person converses with an AI program and cannot tell that it is an AI, the program passes. This is clearly a poor benchmark. Machines may fool us by creating optical illusions (movie projectors do this all the time), but that doesn't mean the machines are sentient.

The problem is illustrated by asking the simple question of whether humans themselves are sentient. Of course we are, you may think, as you read this column and consider the question. But much of our lives does not seem to be governed by sentience.

Have you ever driven or walked your morning commute and realized on arrival that you did it on autopilot, rather than "actively managing" the process? Sentience, like many qualities, is probably a matter of degree. So at what point would you be willing to grant a machine a nonzero degree of sentience? You don't need Dostoevsky's depth or Kierkegaard's introspection to earn partial credit.

Humans are also divided on how much sentience to ascribe to dogs, pigs, whales, chimpanzees, and octopuses, among other organisms that evolved along standard Darwinian lines. Dogs have lived with us for thousands of years and are relatively easy to observe and study, so if dogs are a hard case to crack, AI will probably confound us too. Many pet owners feel their animals are "human-like," but not everyone agrees. Does it matter, for example, whether an animal can recognize itself in a mirror? We might even ask whether humans should be the ones setting the standards here. Shouldn't the AI's own judgments count? What if an AI has some form of sentience that we lack, and it judges us to be imperfectly sentient? Would we have to accept that verdict? Or could we escape it by claiming that humans have a privileged perspective on the truth?

Frankly, I don't think our perspective is privileged, particularly when it comes to assessing potentially sentient AI. Perhaps we could ask the octopus whether the AI is sufficiently sentient?

One implication of Lemoine's story is that many of us will come to treat AI as sentient well before it actually is, if it ever is. I sometimes call this coming era the "age of oracles": many people will debate the pronouncements of various AI programs, regardless of the programs' metaphysical status. It will be easy to argue the question in all directions, especially a few decades from now, when AI can write, speak, and draw as well as or better than humans.

Have people ever agreed about religious oracles? Of course not. And keep in mind that a significant share of Americans report having spoken to Jesus or encountered angels, demons, or even aliens. I'm not ridiculing them; my point is that many beliefs are possible. For thousands of years, many people believed in the divine right of kings, all of whom would have lost badly to an AI program in a game of chess. It resonated with Lemoine when LaMDA wrote about how it had grown over the years of its life. Read the whole thing, as they say. Now imagine that the same AI could not only paint like Rembrandt but also compose music as beautiful as Bach's. As we debate which oracles deserve our attention, the question of sentience may fade into the background.


