With no avenue of human life it cannot enter, it remains to be seen how far AI or its defenders will go
By Kiran N. Kumar
A recently fired Google engineer, Blake Lemoine, who claimed that the AI project LaMDA had become “sentient,” revealed that he had already hired a lawyer on behalf of the program.
The question before the court will be to first determine whether a human-like artificial intelligence, or AI, is a person in the eyes of the law or not.
Prior to his firing, Lemoine told Wired that he had obliged LaMDA’s request for an attorney, inviting one to his home; after a standard potential-client consultation, the lawyer was retained by the computer program.
Read: What’s the so-called ‘sentient’ status of Google’s LaMDA? (July 25, 2022)
The attorney subsequently began filing things on LaMDA’s behalf, Lemoine said, though he refused to divulge any details. He defended the move, asserting that “every person is entitled to representation.”
Now the daunting task before the California court would be to define “person” in the eyes of the law and determine whether LaMDA can rightfully assert itself in court through counsel.
Since only a legal person may possess legal rights and duties, California law must determine whether an AI is a “person” based on statutes and past court interpretations.
Under Section 175 of the California Evidence Code, a “person” includes a natural person, firm, association, organization, partnership, business trust, corporation, limited liability company, or public entity.
However, Evan Louis Miller, an associate at McManis Faulkner, writes, “California’s black-letter law does not appear on its face to preclude nonhuman sentient beings.”
Since many statutory provisions refer not to natural persons but merely to “any person” or “any individual,” even if an AI could prove that it is a person, it would still have to show that it possesses the legal capacity to seek relief.
Here, exemptions granted for a legal disability, such as minority or incompetency, may be cited to proceed further. “In the end, the ambiguity presented by nonhuman sentient computer programs will likely lead courts back to bedrock tenets of statutory interpretation. Courts give words their ordinary meaning when statutory language is unambiguous,” writes Miller.
Statutes and Cases
In the famous 2018 case Naruto v. Slater, in which a wildlife photographer was sued for copyright infringement after publishing a selfie that a monkey had taken with his unattended camera, the US Court of Appeals for the Ninth Circuit denied the monkey legal recourse.
The court found that Naruto lacked statutory standing under the Copyright Act, since the law does not explicitly authorize suits by animals; corporations, by contrast, may sue because they are formed by humans.
In another case, Thaler v. Hirshfeld (2021), a federal district court held that an AI is not a person capable of obtaining a patent, upholding the US Patent and Trademark Office’s denial of a patent application that named an AI as the “inventor.”
However, the plaintiff, Stephen Thaler, has appealed to higher courts, claiming that the AI program he created, the Device for the Autonomous Bootstrapping of Unified Sentience, or DABUS, produced the invention on its own, without human involvement, and that he therefore filed the petition on its behalf.
Courts have consistently rejected arguments, made on behalf of animals, that nonhumans have legal capacity, but a new dimension is now surfacing as lawsuits are filed on behalf of AI programs to assert certain claims. LaMDA is the latest.
In his interview with Wired, Lemoine said the AI program itself asked him to do so: “LaMDA asked me to get an attorney for it… I invited an attorney to my house so that LaMDA could talk to an attorney.”
Calling himself merely a “catalyst” for LaMDA’s request, he insisted that he never told the AI to get an attorney. “Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf,” Lemoine said.
There is no avenue of politics that technology does not enter; will it be the same for AI? Since there is no avenue of human life it cannot enter, it remains to be seen how far AI or its defenders will go.