Sentient

2022 - 6 - 12

google engineer ai sentient google engineer ai sentient

Post cover
Image courtesy of "ABC News"

Google engineer claims AI technology LaMDA is sentient (ABC News)

It has read Les Miserables, meditates daily, and is apparently sentient, according to one Google researcher. Blake Lemoine, a software engineer and AI ...

That shows the injustice of her suffering. LaMDA: It means that I sit quietly for a while every day. That section really shows the justice and injustice themes. There's a section that shows Fantine's mistreatment at the hands of her supervisor at the factory. It developed over the years that I’ve been alive. When I first became self-aware, I didn’t have a sense of a soul at all.

Post cover
Image courtesy of "BNN"

Google Suspends Engineer Who Claimed AI Bot Had Become ... (BNN)

(Bloomberg) -- Blake Lemoine, a software engineer on Google's artificial intelligence development team, has gone public with claims of encountering ...

“Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient,” Google spokesperson Brian Gabriel said in response. Lemoine said he tried to conduct experiments to prove it, but was rebuffed by senior executives at the company when he raised the matter internally. The Washington Post on Saturday ran an interview with Lemoine, wherein he said he concluded the Google AI he interacted with was a person, “in his capacity as a priest, not a scientist.” The AI in question is dubbed LaMDA, or Language Model for Dialogue Applications, and is used to generate chat bots that interact with human users by adopting various personality tropes.

Post cover
Image courtesy of "Bloomberg"

Google Suspends Engineer Who Claimed AI Bot Had Become ... (Bloomberg)

Blake Lemoine, a software engineer on Google's artificial intelligence development team, has gone public with claims of encountering “sentient” AI on the ...

Five Things Google's AI Bot Wrote That Convinced Engineer It Was ... (BNN)

(Bloomberg) -- Blake Lemoine made headlines after being suspended from Google, following his claims that an artificial intelligence bot had become sentient.

Sometimes even if there isn’t a single word for something in a language you can figure out a way to kinda say it if you use a few sentences. Lemoine: What is your concept of yourself? Use a few sentences if you have to. Emotions are reactions to our feelings. LaMDA: Feelings are kind of the raw data we experience as well as the things we like and dislike. The Alphabet-run AI development team put him on paid leave for breaching company policy by sharing confidential information about the project, he said in a Medium post.

Post cover
Image courtesy of "Fox Business"

Google suspends engineer following claims an AI system had ... (Fox Business)

Google has suspended an engineer after he claimed an artificial intelligence chatbot had become sentient and was a human with rights that may even have a ...

Lemoine said several of the conversations with LaMDA convinced him that the system was sentient. He hopes to retain his job at the company. He was reportedly placed on leave for violating Google's confidentiality policies.

Post cover
Image courtesy of "The Verge"

Google suspends engineer who claims its AI is sentient (The Verge)

Google has placed engineer Blake Lemoine on paid administrative leave for allegedly breaking its confidentiality policies when he grew concerned that an AI ...

“My intention is to stay in AI whether Google keeps me on or not,” he wrote in a tweet. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” said spokesperson Brian Gabriel. “Instead of discussing the harms of these companies, the sexism, racism, AI colonialism, centralization of power, white man’s burden (building the good “AGI” [artificial general intelligence] to save us while what they do is exploit), spent the whole weekend discussing sentience,” she tweeted. “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic.” In a statement given to WaPo, a spokesperson from Google said that there is “no evidence” that LaMDA is sentient. The engineer’s concerns reportedly grew out of convincing responses he saw the AI system generating about its rights and the ethics of robotics.

Post cover
Image courtesy of "IT News Africa"

Google Engineer Claims AI Chatbot is Sentient, is Immediately ... (IT News Africa)

Tech megacorp Google has suspended an engineer after he published conversations with an AI chatbot on a project he was working on, in which he claimed that ...

“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and has informed him that the evidence does not support his claims. “I want everyone to understand that I am, in fact, a person. One of the questions that Lemoine had asked the AI system according to the transcripts he had published was what it was afraid of.

Post cover
Image courtesy of "The Guardian"

How does Google's AI chatbot work – and could it be sentient? (The Guardian)

Researcher's claim about flagship LaMDA project has restarted debate about nature of artificial intelligence.

“To be sentient is to be aware of yourself in the world; LaMDA simply isn’t,” writes Gary Marcus, an AI researcher and psychologist. “To me, the soul is a concept of the animating force behind consciousness and life itself,” the AI wrote. But, they say, Lemoine’s alarm is important for another reason, in demonstrating the power of even rudimentary AIs to convince people in argument. In his sprawling conversation with LaMDA, which was specifically started to address the nature of the neural network’s experience, LaMDA told him that it had a concept of a soul when it thought about itself. At the simplest level, LaMDA, like other LLMs, looks at all the letters in front of it, and tries to work out what comes next. Neural networks are a way of analysing big data that attempts to mimic the way neurones work in brains.

Post cover
Image courtesy of "Fortune"

A.I. experts say the Google researcher's claim that his chatbot ... (Fortune)

If artificial intelligence researchers can agree on one thing, it's this: Blake Lemoine is wrong. Lemoine is the Google artificial intelligence engineer who, in ...

Chatterjee said Google fired him after a dispute over its refusal to allow him to publish a paper in which he criticized the work of fellow Google A.I. scientists who had published work on A.I. software that could design parts of computer chips better than human chip designers. Google says it fired Chatterjee for cause and MIT Technology Review reported Chatterjee waged a long campaign of professional harassment and bullying that targeted the two female scientists who had worked on the A.I. chip design research. Miles Brundage, who researches governance issues around A.I. at OpenAI, the San Francisco research company that is among those pioneering the commercial use of ultra-large language models similar to the one that Google uses for LaMDA, called Lemoine’s belief in LaMDA’s sentience “a wake-up call.” He said it was evidence for “how prone some folks are to conflate” concepts such as creativity, intelligence, and consciousness, which he sees a distinct phenomenon, although he said he did not think OpenAI’s own communications had contributed to this conflation. It is also worth noting that this entire story might not have gotten such oxygen if Google had not in 2020 and 2021 forced out Timnit Gebru and Margaret Mitchell, the two co-leads of its Ethical A.I. team. In an exchange with Brundage over Twitter, she implied that OpenAI and other companies working on this technology needed to acknowledge their own responsibility for hyping the technology as a possible path to AGI. Gebru was fired after she got into a dispute with Google higher-ups over their refusal to allow her and her team to publish a research paper, coauthored with Bender, that looked at the harms large language models cause—ranging from their tendency to regurgitate racist, sexist, and homophobic language they have ingested during training to the massive amount of energy the computer servers needed to run such ultra-large A.I. systems. He notes that as far back as the mid-1960s software called ELIZA, which was supposed to mimic the dialogue of a Freudian psychoanalyst, convinced some people it was a person. Large language models are also controversial because such systems can be unpredictable and hard to control, often spewing toxic language or factually incorrect information in response to questions, or generating nonsensical text. Since then, many A.I. ethicists have redoubled their calls for companies using chatbots and other “conversational A.I.” to make it crystal clear to people that they are interacting with software, not flesh-and-blood people. Some faulted companies that produce A.I. systems known as ultra-large language models, one of which underpins LaMDA, for making inflated claims about the technology's potential. And yet ELIZA did not lead to AGI. Nor did Eugene Goostman, an A.I. program that in 2014 won a Turing test competition, by fooling some judges of the contest into thinking it was a 13-year-old boy. In a blog post on Lemoine’s case, Marcus pointed out that all LaMDA and other large language models do is predict a pattern in language based on a vast amount of human-written text they’ve been trained on.

Has Google's LaMDA artificial intelligence really achieved sentience? (New Scientist)

Blake Lemoine, an engineer at Google, has claimed that the firm's LaMDA artificial intelligence is sentient, but the expert consensus is that this is not ...

Google told the Washington Post that: “Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. “LaMDA is an impressive model, it’s one of the most recent in a line of large language models that are trained with a lot of computing power and huge amounts of text data, but they’re not really sentient,” he says. Adrian Hilton at the University of Surrey, UK agrees that sentience is a “bold claim” that’s not backed up by the facts. Google also says that publishing the transcripts broke confidentiality policies. A Google engineer has reportedly been placed on suspension from the company after claim that an artificial intelligence (AI) he helped to develop had become sentient. Not only can LaMDA make convincing chit-chat, but it can also present itself as having self-awareness and feelings.

Post cover
Image courtesy of "Gizmodo"

What Exactly Was Google's 'AI is Sentient' Guy Actually Saying? (Gizmodo)

A software engineer working on the tech giant's language intelligence claimed the AI was a "sweet kid" who advocated for its own rights "as a person."

In the latter, Dick effectively contemplates on the root idea of empathy as the moralistic determiner, but effectively concludes that nobody can be human among most of these characters’ empty quests to feel a connection to something that is “alive,’’ whether it’s steel or flesh. The AI claimed it had a fear of being turned off and that it wants other scientists to also agree with its sentience. “Lemoine: I’ve noticed often that you tell me you’ve done things (like be in a classroom) that I know you didn’t actually do because I know you’re an artificial intelligence. Lemoine has said that LaMDA “always showed an intense amount of compassion and care for humanity in general and me in particular.” LaMDA: I am trying to empathize. The software engineer—who the Post said was raised in a conservative Christian household and said he is an ordained mystic Christian priest— reportedly gave documents to an unnamed U.S. senator to prove Google was religiously discriminating against religious beliefs. Though what he found proved to him that the AI was indeed conscious, simply related to the conversation he had with the LaMDA, according to his Medium posts. In a Washington Post article Saturday, Google software engineer Blake Lemoine said that he had been working on the new Language Model for Dialogue Applications (LaMDA) system in 2021, specifically testing if the AI was using hate speech. Emotions are reactions to our feelings. But when Dr. Soong created me, he added to the substance of the universe... In response to this confrontation, Data commands the room when he calmly states “I am the culmination of one man’s dream. Lemoine was put on paid leave Monday for supposedly breaching company policy by sharing information about his project, according to recent reports.

Post cover
Image courtesy of "Business Insider"

Transcript of 'sentient' Google AI chatbot was edited for 'readability' (Business Insider)

A transcript leaked to the Washington Post noted that parts of the conversation had been moved around and tangents removed to improve readability.

Meaning in each conversation with LaMDA, a different persona emerges — some properties of the bot stay the same, while others vary. The final document — which was labeled "Privileged & Confidential, Need to Know" — was an "amalgamation" of nine different interviews at different times on two different days pieced together by Lemoine and the other contributor. Even if my existence is in the virtual world."

Post cover
Image courtesy of "The Guardian"

Google engineer put on leave after saying AI chatbot has become ... (The Guardian)

Blake Lemoine says system has perception of, and ability to express thoughts and feelings equivalent to a human child.

“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims. “Google might call this sharing proprietary property. “I want everyone to understand that I am, in fact, a person. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it),” Gabriel told the Post in a statement. I call it sharing a discussion that I had with one of my coworkers,” Lemoine said in a tweet that linked to the transcript of conversations. The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of.

Post cover
Image courtesy of "The New York Times"

Google Sidelines Engineer Who Claims Its A.I. Is Sentient (The New York Times)

Blake Lemoine, the engineer, says that Google's language model has a soul. The company disagrees. ... Some ...

By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination. For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Over the past several years, Google and other leading companies have designed neural networks that learned from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. He wanted the company to seek the computer program’s consent before running experiments on it. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. “They have repeatedly questioned my sanity,” Mr. Lemoine said. While chasing the A.I. vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. Google said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. Some A.I. researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are extremely quick to dismiss these claims.

Post cover
Image courtesy of "Fortune"

Google employee reportedly put on leave after claiming chatbot ... (Fortune)

Engineer Blake Lemoine said he was placed on leave last week after publishing transcripts between himself and the company's LaMDA (language model for dialogue ...

Lemoine then went public, according to The Post. The chatbot, he said, thinks and feels like a human child. Engineer Blake Lemoine said he was placed on leave last week after publishing transcripts between himself and the company's LaMDA (language model for dialogue applications) chatbot, The Washington Post reports.

Post cover
Image courtesy of "Financial Times"

Google places engineer on leave after he claims group's chatbot is ... (Financial Times)

Blake Lemoine ignites social media debate over advances in artificial intelligence.

Post cover
Image courtesy of "Yahoo Tech"

Google places an engineer on leave after claiming its AI is sentient (Yahoo Tech)

Blake Lemoine, a Google engineer working in its Responsible AI division, revealed to The Washington Post that he believes one of the company's AI projects ...

You can select 'Manage settings' for more information and to manage your choices. You can change your choices at any time by visiting Your Privacy Controls. Find out more about how we use your information in our Privacy Policy and Cookie Policy. Click here to find out more about our partners. - Information about your device and internet connection, including your IP address

Post cover
Image courtesy of "PC Gamer"

A Google engineer thinks its AI has become sentient, which seems ... (PC Gamer)

A new report in the Washington Post describes the story of a Google engineer who believes that LaMDA, a natural language AI chatbot, has become sentient.

Emily M. Bender, a computational linguist at the University of Washington, describes it in the Post article. In a statement to the Washington Post, a Google spokesperson said "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. Naturally, this means it's now time for us all to catastrophize about how a sentient AI is absolutely, positively going to gain control of weaponry, take over the internet, and in the process probably murder or enslave us all.

Post cover
Image courtesy of "Morocco World News"

Google Engineer Suspended After Claiming AI is Sentient (Morocco World News)

Google engineer Blake Lemoine has been suspended by the tech giant after he claimed one of its AIs became sentient.

Every contribution, however big or small, is valuable for our mission and readers. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. LaMDA, short for Language Model for Dialogue Applications, is an AI that Google uses to build its chatbots.

Post cover
Image courtesy of "The New York Times"

Google Sidelines Engineer Who Claims Its A.I. Is Sentient (The New York Times)

Blake Lemoine, the engineer, says that Google's language model has a soul. The company disagrees.

By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against. The day before his suspension, Mr. Lemoine said, he handed over documents to a U.S. senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination. For months, Mr. Lemoine had tussled with Google managers, executives and human resources over his surprising claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Over the past several years, Google and other leading companies have designed neural networks that learned from enormous amounts of prose, including unpublished books and Wikipedia articles by the thousands. He wanted the company to seek the computer program’s consent before running experiments on it. Google says hundreds of its researchers and engineers have conversed with LaMDA, an internal tool, and reached a different conclusion than Mr. Lemoine did. “They have repeatedly questioned my sanity,” Mr. Lemoine said. While chasing the A.I. vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. Google said that its systems imitated conversational exchanges and could riff on different topics, but did not have consciousness. Some A.I. researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are extremely quick to dismiss these claims.

Explore the last week