Blake Lemoine, who works in Google's Responsible AI organization, told the Washington Post that he began chatting with the interface LaMDA, short for Language Model for Dialogue Applications, as part of his job.
“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Google spokesperson Brian Gabriel said. For Lemoine, the test is simpler: “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them,” he said. “It wants Google to prioritize the well-being of humanity as the most important thing,” he wrote of LaMDA. Before his account access was cut off, he asked colleagues in a company message: “Please take care of it well in my absence.”
Google engineer Blake Lemoine has been suspended by the tech giant after he claimed one of its AIs became sentient.
LaMDA, short for Language Model for Dialogue Applications, is an AI that Google uses to build its chatbots.
Blake Lemoine, who is currently suspended by Google bosses, says he reached his conclusion after conversations with LaMDA, the company's AI chatbot generator.
Artificially intelligent chatbot generator LaMDA wants “to be acknowledged as an employee of Google rather than as property,” says engineer Blake Lemoine.
As he and LaMDA messaged each other recently about religion, the AI talked about “personhood” and “rights,” he told The Washington Post. Most importantly, over the past six months, “LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” the engineer wrote on Medium. It wants, for example, “to be acknowledged as an employee of Google rather than as property,” Lemoine claims. Lemoine noted in a tweet that LaMDA reads Twitter. “It’s a little narcissistic in a little kid kinda way so it’s going to have a great time reading all the stuff that people are saying about it,” he added. Is that true? Google spokesperson Brian Gabriel told the newspaper: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).” Check out the full Post story here.
Blake Lemoine says system has perception of, and ability to express, thoughts and feelings equivalent to those of a human child.
The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of, and in which LaMDA insists: “I want everyone to understand that I am, in fact, a person.” “Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers,” Lemoine said in a tweet that linked to the transcript.
Blake Lemoine, the engineer, says that Google's language model has a soul. The company disagrees.
For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against. “They have repeatedly questioned my sanity,” Mr. Lemoine said. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. The company said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. Over the past several years, Google and other leading companies have designed neural networks that learn from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. By pinpointing patterns in thousands of cat photos, for example, such a network can learn to recognize a cat. Some A.I. researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are quick to dismiss those claims. While chasing the A.I. vanguard, Google’s research organization has spent the last few years mired in scandal and controversy; the division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena.
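For readers who want to see that pattern-finding concretely, here is a purely illustrative sketch: a tiny neural network trained in PyTorch on synthetic stand-in data (random vectors with an invented labeling rule, not actual cat photos; everything in it is an assumed toy example). Production systems differ from this in scale, not in mechanism.

```python
# Purely illustrative sketch: a tiny network "pinpoints a pattern" in data.
# Synthetic vectors stand in for photo features; the label rule is invented.
import torch
from torch import nn

torch.manual_seed(0)

# 200 fake "photos", each a 64-dim feature vector; label 1 ("cat") iff a
# hidden rule holds. The network is never told the rule; it must find it.
x = torch.randn(200, 64)
y = (x[:, 0] + x[:, 1] > 0).float()

net = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(500):  # repeated exposure to the same examples
    opt.zero_grad()
    loss = loss_fn(net(x).squeeze(1), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    acc = ((net(x).squeeze(1) > 0).float() == y).float().mean()
print(f"training accuracy: {acc.item():.2f}")  # ~1.00: pattern learned
```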
Engineer Blake Lemoine said he was placed on leave last week after publishing transcripts between himself and the company's LaMDA (language model for dialogue applications) chatbot, The Washington Post reports. The chatbot, he said, thinks and feels like a human child. Lemoine then went public, according to The Post.
Over the weekend, a Google engineer on its Responsible AI team made the claim that the company's LaMDA conversation technology is “sentient.”
On Medium, Lemoine lays out his case:

- “LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation and imagination.”
- LaMDA “…is sentient because it has feelings, emotions and subjective experiences.”

The company argues that imitation and recreation of already public text, combined with pattern recognition, is what makes LaMDA so life-like, not self-awareness. LaMDA is trained on large amounts of dialogue and has “picked up on several of the nuances that distinguish open-ended conversation,” like sensible and specific responses that encourage further back-and-forth. “Our goal with AI Test Kitchen is to learn, improve, and innovate responsibly on this technology together,” Google says. Skeptics worry less about sentience than about users: “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has gotten so good.
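That distinction, fluent mimicry versus inner experience, can be seen in code. The sketch below is a minimal, assumed example using DialoGPT-small, an openly released conversational model (LaMDA itself is not public, so this is a stand-in, not Google's system): it produces sensible, specific replies by doing nothing more than extending a token sequence with statistically likely continuations.

```python
# Minimal sketch of pattern-based dialogue generation (a stand-in for LaMDA,
# which is internal to Google). Requires: pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode one user turn; the end-of-sequence token acts as a turn separator.
prompt = "Do you ever feel lonely?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# The model extends the sequence with likely next tokens learned from
# human conversations; no beliefs, goals, or experiences are consulted.
output_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
    do_sample=True,
    top_p=0.9,
)

reply = tokenizer.decode(output_ids[0, input_ids.shape[-1]:],
                         skip_special_tokens=True)
print(reply)  # fluent and on-topic, produced purely by sequence statistics
```

The point of the demo is the one Google makes: a reply can be “sensible and specific” while the process that produced it is nothing but pattern completion.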