Blake Lemoine, an engineer in Google's Responsible AI division, revealed to The Washington Post that he believes one of the company's AI projects has achieved sentience. According to reports, he was placed on paid administrative leave for violating Google's confidentiality policy.
Lemoine said he was spooked by the company's artificial intelligence chatbot, which he claimed had become "sentient," describing it as a "sweet kid," according to the report.
The engineer told the Washington Post that he began chatting with the interface LaMDA — Language Model for Dialogue Applications — in fall 2021 as part of his job.
In a Medium post published on Saturday, Lemoine declared that LaMDA had advocated for its rights "as a person," and revealed that he had engaged it in conversations about religion, consciousness, and robotics.
Google called LaMDA its "breakthrough conversation technology" last year. The conversational artificial intelligence is capable of engaging in natural-sounding, open-ended conversations.