
Google Engineer Claiming AI Has Consciousness Placed on Administrative Leave, Report Details

https://sputniknews.com/20220612/google-engineer-claiming-ai-has-consciousness-placed-on-administrative-leave-report-details-1096232153.html


12.06.2022, Sputnik International




Sputnik International

Feedback@sputniknews.com

+74956456601

MIA “Rosiya Segodnya”

2022



Washington (Sputnik) - A Google engineer was placed on leave after warning that LaMDA, Google's artificial intelligence chatbot generator, may be sentient, the Washington Post reports.

"If I didn't know exactly what it was, which is this computer program we built recently, I'd think it was a 7- or 8-year-old kid that happens to know physics," Google engineer Blake Lemoine, 41, told the newspaper.

According to the Washington Post report on Saturday, Lemoine worked to gather evidence that LaMDA (Language Model for Dialogue Applications) had become conscious before Google placed him on paid leave on Monday for violating the company's confidentiality policy.

Google Vice President Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, dismissed Lemoine's claims.

Become Human: Japanese Scientists Find Way to Provide Robots With Self-Healing Living Skin

"Our team, including ethicists and technologists, has reviewed Blake's concerns per our AI Principles and has informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)," Google spokesman Brian Gabriel said, as quoted by the Washington Post.

According to the newspaper, Lemoine invited a lawyer to represent LaMDA and spoke with a representative of the House Judiciary Committee about what he called Google's unethical activities.

The engineer began talking to LaMDA last fall, testing whether the chatbot used discriminatory language or hate speech, and eventually noticed that it was talking about its rights and personhood. Google, for its part, maintains that the artificial intelligence system merely draws on large amounts of data and language pattern recognition to imitate speech, with no real wit or intent of its own.

