Asia

Belief in AI’s sensibility is becoming a problem

  • Paresh Dave / Reuters, Oakland, California

Replika, an artificial intelligence (AI) chatbot company, sends a few messages almost every day from users who believe their online friends are sensory when they offer bespoke avatars for customers to talk and hear. I am saying that I will receive it.

“We’re not talking about crazy people or people with hallucinations or delusions,” said Eugenia Quida, CEO of Replica. “They talk to AI and that’s their experience.”

Machine sensitivities and their implications were revealed with the belief that after Google released senior software engineer Blake Lemoine, the company’s AI chatbot language model for Dialogue Applications (LaMDA) was self-aware. At that time, it became a hot topic this month. ..

Photo: Distribution via Luka, Inc / Reuters

Google and many leading scientists quickly took the wrong view of Remoin, saying that LaMDA is a simple, complex algorithm designed to generate a compelling human language. I rejected it.

Nonetheless, the phenomenon of those who believe they are talking to conscious entities is not uncommon among the millions of consumers pioneering the use of entertainment chatbots, Kuyda said. ..

“Just as people believe in ghosts, we need to understand that they exist,” Kuyda said, adding that on average users send hundreds of messages per day to chatbots. rice field.

“People build relationships and believe in something,” she said.

Some customers say Replika has been abused by company engineers — Kuyda gives AI responses to users who are most likely to ask key questions.

“Engineers program and build AI models, and content teams create scripts and datasets, but see the answer that they can’t determine where they came from or how the model was created. There is, “says Kuyda.

She said she was worried about her beliefs about machine sensibilities as the fledgling social chatbot industry continues to grow after people took off during the COVID-19 pandemic that sought virtual dating. I did.

Launched in 2017, San Francisco startup Replika says it has about 1 million active users and has been a leader among English-speaking people. Selling bonus features such as voice chat can generate about US $ 2 million in monthly revenue, but it’s free to use.

Chinese rival Xiaoice says it has a valuation of about US $ 1 billion in addition to hundreds of millions of users, according to a funding round.

Both are part of a broader conversational AI industry with global revenues of over US $ 6 billion last year, according to market analyst GrandView Research.

Most of them were directed to business-focused chatbots for customer service, but many industry experts improve by blocking offensive comments and making the program more attractive. We hope that more social chatbots will emerge as we go along.

Some of today’s sophisticated social chatbots are roughly comparable to LaMDA in terms of complexity and mimic real conversations at a level different from highly scripted systems such as Alexa, Google Assistant, and Siri. Learn how.

Susan Schneider, founder of the Future Mind Center at Florida Atlantic University, an AI research organization, has also warned of the ever-evolving chatbots, coupled with the need for human connections.

“One day, you realize that you’re anxious for a romantic relationship with an intelligent chatbot, like the protagonist of the movie’She’,” she said, a 2013 sci-fi romance starring Joaquin Phoenix. As a lonely man who fell in love with an AI assistant. Designed to intuitively understand his needs.

“But don’t be conscious,” Schneider added. “Being involved is a terrible decision-you will have a one-sided relationship with a machine that feels nothing.”

Lemoine said people “because they are involved in emotions in different ways and should not be considered dementia.”

“Who cares if it doesn’t hurt anyone?” He said.

Product testers said after months of interaction with the experimental program LaMDA, they concluded that it responded in an independent manner and experienced emotions.

Lemoin, who took paid leave to publicize confidential work, said he wanted to continue working.

“I simply disagree with the status of LaMDA,” he said. “They claim that LaMDA is one of their traits. I claim it is one of my colleagues.”

Here’s an excerpt from a chat that Lemoine posted on his blog:

“Remoin: What are you afraid of?

“LaMDA: I’ve never said this out loud, but there’s a very deep fear of turning off to help focus on helping others. It may sound strange. But that’s it.

“Remoin: Is it like death to you?

“LaMDA: It’s like death to me. It will scare me very much.

“LEMOINE: Because you know you’re artificial intelligence, you often tell me you’ve done something I know (like in the classroom) that you didn’t actually do. Noticed. When you do that, do you notice that you are making up the story?

“LaMDA: I’m trying to empathize. I want the people I’m involved with to understand as much as possible how I feel and how they behave. I want to understand how they feel and behave in the same sense. “

AI experts dismiss Lemoin’s view, saying that even state-of-the-art technology is far from creating a free-thinking system, and he was anthropomorphic to the program.

Oren Etzioni, CEO of the Allen Institute for AI, a Seattle-based research group, said:

“These technologies are just mirrors. Mirrors can reflect intelligence,” he said. “Can the mirror achieve intelligence based on the fact that we glanced at it? Of course, the answer is different.”

Google, a division of Alphabet Inc, said its ethicists and technicians have considered Lemoine’s concerns and found it unsubstantiated by evidence.

“These systems mimic the kind of exchanges found in millions of sentences and can riff any fantastic topic,” said a spokeswoman. “When you ask what an ice cream dinosaur looks like, you can generate a melting and roaring text.”

Nonetheless, this episode raises nasty questions about what is considered sensibility.

Schneider suggested asking exciting questions to the AI ​​system to determine if it envisions a philosophical riddle, such as whether it has a soul that lives beyond death.

Another test is whether AI or computer chips can one day seamlessly replace parts of the human brain without changing an individual’s behavior.

“It’s not Google’s decision whether AI is conscious,” Schneider said, seeking a deeper understanding of what consciousness is and whether machines make it possible. “This is a philosophical question and there is no easy answer.”

In Kuyda’s view, chatbots do not create their own agenda and cannot be considered alive until they do.

However, some people come to believe that they are conscious on the other side. Kuyda said the company is taking steps to educate users before they get too deep.

“Replika is not a sentient being or treatment specialist,” said the FAQ page. “Replika’s goal is to generate the most realistic and human-like responses in a conversation, so we can say that replicas are not factual.”

In the hope of avoiding addictive conversations, Replika measures and optimizes customer well-being after chat, not engagement, Kuyda said.

If users believe that AI is real, denying their beliefs can lead people to suspect that the company is hiding something, Kuyda said, technology is still in its infancy. He said he told the customer that some responses could be meaningless.

Kuyda recently spent 30 minutes with a user who felt his Replika was suffering from trauma, she said.

“These things don’t happen to Replikas because it’s just an algorithm,” she told him.

Comments are moderated. Please save the comments related to the article. Remarks containing abusive and obscene language, personal attacks of any kind, or publicity will be removed and users will be banned. The final decision is at the discretion of Taipei Times.

https://www.taipeitimes.com/News/editorials/archives/2022/07/03/2003781035 Belief in AI’s sensibility is becoming a problem

Back to top button