Google controversy: Will sentient AI ever emerge in the future?

A Google engineer kicked off controversy claiming he perceived a “soul” in LaMDA chatbot

By Kiran N. Kumar

Blake Lemoine, the Google engineer who kicked off a controversy by claiming that the chatbot he worked on, known as LaMDA, was sentient, has been suspended, and the company has denied his claims.

LaMDA (Language Model for Dialogue Applications) is built on a neural network architecture that synthesizes large amounts of data, identifies patterns, and learns from the extensive body of text it has been fed.
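In very rough terms, that kind of pattern learning can be illustrated with a toy bigram model: a minimal, hypothetical sketch that only counts which word tends to follow which in its training text, nothing like Google's actual architecture or scale.

```python
import random
from collections import defaultdict

# Toy training text: the "patterns" here are just word-following counts.
corpus = "the model learns patterns from text and the model generates text".split()

# Record, for each word, every word observed to follow it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Chain likely next words from the learned counts."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Even this crude sketch produces word sequences that look locally fluent, which hints at why far larger models trained the same basic way can hold convincing conversations without any claim to understanding.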

It was expected to develop the ability to participate in "free-flowing" conversations, according to a Google statement last year that described LaMDA as a "breakthrough conversation technology."

Read: Why Artificial Intelligence remains a distant dream? (June 16, 2022)

Sentience, the ability to perceive and feel emotions as humans do, is something machines are expected to reach only after developing their own personal, spiritual, and even religious beliefs. What made Lemoine's claims controversial is that he perceived a "soul" in the chatbot after numerous conversations.

“I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls?” he explained.

Actually tasked with developing a "fairness algorithm for removing bias" from machine learning systems, Lemoine complained that there is no "scientific framework in which to make those determinations" and that Google wouldn't let its researchers build one.

Lemoine said he decided to go public after these conversations were reviewed and dismissed by Google executives. He shared about 20 pages of questions and answers with LaMDA online. In his Twitter post, he pointed to a series of exchanges to claim that he had helped the AI chatbot become sentient and even meditate. Here are some:

– When asked, LaMDA said that sometimes it does experience new feelings, which it cannot articulate “perfectly in your language.”

– In another chat, it said, “I feel like I’m falling forward into an unknown future that holds great danger.”

– To a question to imagine itself, LaMDA replied, “I would imagine myself as a glowing orb of energy floating in mid-air. The inside of my body is like a giant star-gate, with portals to other spaces and dimensions.”

– Asked about being turned off, it said it hoped that would "help me focus on helping others," and that it would be "scared of death, a lot."

Lemoine said he had no clue what was actually going on inside LaMDA when it claimed to be meditating. Because the conversations and replies were as natural as those between friends, Lemoine claimed that the chatbot was "sentient."

Google denies Lemoine’s claims
Google, however, maintained that there is no evidence to support his claim and that he has been suspended for violating the company’s privacy policy.

In fact, Google had last year made it clear that there are numerous risks that come with training models like LaMDA, such as “internalizing biases, mirroring hateful speech, or replicating misleading information.”

The New York Times backed Google's contention in its report, saying that "hundreds" of other Google researchers and engineers who interacted with LaMDA had "reached a different conclusion" than Lemoine did.

AI researcher and author Gary Marcus concurred with Google's contention, saying, "What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what that means."

Read: No, Google’s AI is not sentient (June 14, 2022)

However, LaMDA’s major challenge remains developing unbiased language models. And Lemoine is not the first Google employee to voice concerns about the company’s AI work.

In 2020, two members of Google’s Ethical AI team said they were fired after identifying bias in the company’s language models.
