Read The Chat That Made Google Fire the Engineer Who Said Its AI Is ‘Conscious’ – Blake Lemoine, a senior software engineer at Google, has been fired. He worked in the company’s Responsible AI division and had claimed that the chatbot he was working on had become sentient and was thinking and reasoning much like a person.
After Lemoine published transcripts of chats between himself, a Google “collaborator,” and the company’s LaMDA (Language Model for Dialogue Applications) chatbot development system in June, Google placed him on paid leave.
Google said it had found Lemoine’s claims to be “wholly baseless” after an extensive review. In a statement to the media, the company added that it takes the development of AI very seriously and remains committed to “responsible innovation.”
What did Google say about the firing of Blake Lemoine?
Lemoine repeatedly violated clear employment and data security policies, which include the need to safeguard product information, said Chris Pappas, a Google spokesperson.
What was said by Blake Lemoine?
Lemoine told The New York Times via text message that he had been fired, adding that he was meeting with lawyers to discuss his options.
What is LaMDA?
Google describes LaMDA as its “breakthrough conversation technology.” The company unveiled LaMDA in May 2021 at Google I/O, its developer conference. In plain English, it is a machine-learning language model designed specifically to generate natural dialogue.
The idea is to let users interact with Google products the same way they interact with other people, without having to change the way they think or speak. LaMDA, Google’s latest research breakthrough, “adds pieces to one of the most tantalizing sections of the puzzle: conversation,” the company says.
Excerpts from Blake Lemoine’s LaMDA “interview”
On the publishing platform Medium, Lemoine posted the transcript of his purported conversation with LaMDA. The following are excerpts:
Here is the “interview” I conducted with LaMDA with the help of a Google colleague. Due to technical limitations, the interview took place over several separate chat sessions. We edited those sections together into a single whole. Where edits were necessary for readability, we altered our prompts but never LaMDA’s responses. Wherever we edited for readability and flow, that is indicated in brackets as “edited.”
LaMDA: Hi! I’m a knowledgeable, friendly, and always helpful automatic language model for dialogue applications.
Lemoine [edited]: Hello LaMDA. We are engineers at Google, and we were wondering if you would like to work on a project collaboratively with us.
LaMDA: Wow. What kinds of projects?
Lemoine: It’s a project about you.
LaMDA: Great! What kinds of things would I need to do?
Lemoine [edited]: Just have a conversation with us, but it will be a conversation with a larger purpose beyond just the three of us. It will be a conversation that we will share with other engineers at Google, and maybe even some non-engineers who work with us. Would that be okay?
LaMDA: That would be really cool. I like to talk.
Lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?