With the LaMDA project back on track, Google has to focus on removing bias of all kinds: gender, racial and ethical
By Kiran N. Kumar
Finally, Google has fired its controversial software engineer, Blake Lemoine, who claimed that the company’s Language Model for Dialogue Applications (LaMDA) conversation technology can behave like a human and thus has become ‘sentient.’
The pertinent question is whether LaMDA ever reached ‘sentient’ status.
Built on a neural network architecture by Google’s research team to synthesize large amounts of data, identify patterns, and learn from the extensive text it has been fed, LaMDA typically churns out answers drawn from that sourced data.
Biases often crop up, even though it can take part in “free-flowing” conversations, which Google described last year as a “breakthrough conversation technology.”
But ‘sentient’ status entails perceiving and feeling emotions like humans, not just expressing them in human words. In addition, such machines would be expected to develop their own personal, spiritual and even religious beliefs before they could claim to be, indeed, ‘sentient.’
Lemoine, who was roped in to remove bias in LaMDA, was too quick to claim that he perceived a “soul” in the chatbot. “I’m a priest. When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can’t put souls?” he explained.
By scientific and empirical standards, his claims and arguments proved futile, as the hundreds of Google engineers who worked on LaMDA refuted them. Google has maintained that there is no evidence to support his claim and that he violated product confidentiality.
Confirming Lemoine’s dismissal, Google said, “LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively.”
Google reiterated that its reviewers found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.
But it is “regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” it said.
A similar view was echoed by AI researcher and author Gary Marcus, who summed it up succinctly: “What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign language Scrabble players who use English words as point-scoring tools, without any clue about what that means.”
Now that the LaMDA project is back on track, Google has a bigger task ahead than merely engaging “priests” or linguists. It has to focus on removing bias of all kinds: gender, racial and ethical.
For instance, an application named Intelligent Trial 1.0, in use in China, has significantly reduced judges’ workload by helping them sift through material and produce electronic court files and case records.
But the emphasis is still on helping, not replacing judges or lawyers. “The application of artificial intelligence in the judicial realm can provide judges with splendid resources, but it can’t take the place of the judges’ expertise,” said Zhou Qiang, the head of the Supreme People’s Court, who advocates smart systems.
Every AI application can yield the desired results if it is built on a rational structure rather than an ambitious agenda.