Technology may one day become the judge of good and bad human behavior and assign appropriate punishments
By Kiran N. Kumar
China has long been experimenting with a robot called Xiaofa, which stands in the Beijing No. 1 Intermediate People’s Court, offering legal advice and answering more than 40,000 queries instantly.
With some 100 such robots deployed, China is actively pursuing a transition to smart justice, using them to retrieve case histories and past verdicts and reducing the number of officials required to staff its courts.
These courts are reportedly using artificial intelligence to scan private messages and social media comments for use as evidence, in addition to facial recognition technology to identify and convict offenders.
Read: Will robots replace doctors in surgery rooms? (July 19, 2022)
While the use of AI is likely to reshape legal systems around the world, a pertinent question surfaces: can these AI-driven robots eventually replace a judge?
Meng Jianzhu, a former senior official of the Chinese Communist Party, once expressed confidence that the Chinese government would soon start using AI to predict where crime and disorder may occur.
“Artificial intelligence can complete tasks with a precision and speed unmatchable by humans, and will drastically improve the predictability, accuracy and efficiency of social management,” Meng said.
In China, an AI application called Intelligent Trial 1.0 is already in use to reduce judges’ workloads, not to replace them.
“The application of artificial intelligence in the judicial realm can provide judges with splendid resources, but it can’t take the place of the judges’ expertise,” insisted Zhou Qiang, head of the Supreme People’s Court, who advocated the introduction of smart systems as far back as 2017.
Eliminating bias?
It is established beyond doubt that AI systems can exhibit biases that stem from their programming and data sources. For instance, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group.
Systemic biases result from institutions operating in ways that disadvantage certain groups of society based on their race, color and neighborhood.
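As an illustration only (not drawn from the NIST report), the short Python sketch below shows the kind of check that can reveal such underrepresentation before a model is ever trained; the attribute name and the 10 percent threshold are assumptions made for the example.

# Hypothetical sketch: flag under-represented groups in a training dataset.
# The attribute name and the 10% threshold are illustrative assumptions.
from collections import Counter

def underrepresented_groups(records, attribute, threshold=0.10):
    """Return groups whose share of the dataset falls below the threshold."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items() if n / total < threshold}

# Toy dataset in which one group makes up only 5% of the records.
data = [{"group": "A"}] * 70 + [{"group": "B"}] * 25 + [{"group": "C"}] * 5
print(underrepresented_groups(data, "group"))  # {'C': 0.05}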
Read: Future Shock: Automatic restaurants and robotic kitchens await you (May 11, 2022)
To counter the harmful effects of bias in AI systems, researchers at the National Institute of Standards and Technology (NIST) recommend looking for the source of these biases — beyond the data to the broader societal factors that influence how technology is developed.
In a revised NIST report, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence” (NIST Special Publication 1270), Reva Schwartz, principal investigator for AI bias at NIST, articulates how bias manifests itself not only in AI algorithms and the data used to train them, but also in the societal context in which AI systems are used.
“Context is everything,” said Schwartz, one of the NIST report’s authors. “If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public’s trust in AI.”
“Many of these factors go beyond the technology itself to the impacts of the technology, and the comments we received from a wide range of people and organizations emphasized this point.”
When these human biases seep into computational and AI systems through the data used to build them, they can form a pernicious mixture, the NIST report’s authors write.
To address these issues, they recommend a “socio-technical” approach to mitigating bias in AI, one that involves engaging stakeholders and identifying suitable measurement techniques.
“It’s important to bring in experts from various fields — not just engineering — and to listen to other organizations and communities about the impact of AI,” Schwartz said.
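One concrete measurement technique of the kind the report’s authors point to is a simple group-fairness comparison. The sketch below is a minimal, hypothetical example, not a method prescribed by NIST: it measures the gap between the rates at which a system hands out favorable outcomes to two groups.

# Hypothetical sketch of one bias-measurement technique: the gap between the
# favorable-outcome rates a system produces for two groups (demographic parity gap).
def positive_rate(decisions):
    return sum(decisions) / len(decisions)

def parity_gap(group_a_decisions, group_b_decisions):
    """Absolute difference in favorable-outcome rates; 0.0 means parity."""
    return abs(positive_rate(group_a_decisions) - positive_rate(group_b_decisions))

# Toy example: 1 = favorable decision, 0 = unfavorable.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% favorable
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # 37.5% favorable
print(f"Demographic parity gap: {parity_gap(group_a, group_b):.3f}")  # 0.375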
Read: Flippy, meet Chippy, Chipotle’s new tortilla chip-cooking robot arm (March 16, 2022)
Bias in court systems
Given that AI is used not merely to sift through data but to develop cognitive skills by learning from past events and cases, the future scenario for courts is intriguing, even ominous: will AI one day make better decisions than humans?
Since human decisions are often susceptible to prejudice and bias, frequently unconscious, some technology developers argue that algorithms can set aside factors that have no legal bearing on individual cases, such as gender and race.
When judges weigh whether to grant bail and how likely a defendant is to reoffend, algorithms can draw on evidence-based analysis of the risks and deliver recommendations instantly. In this scenario, the subjective decision-making of individual judges would be replaced by AI-driven solutions.
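A heavily simplified sketch of what such an evidence-based risk estimate might look like follows; the features, weights, and threshold are invented for illustration (no real pretrial tool is being reproduced here), and attributes such as gender and race are deliberately left out of the inputs, which is exactly the argument these developers make.

# Hypothetical sketch of an evidence-based pretrial risk score.
# Features, weights, and the threshold are illustrative assumptions; protected
# attributes such as gender and race are deliberately not inputs.
import math

WEIGHTS = {"prior_convictions": 0.6, "prior_failures_to_appear": 0.9, "pending_charges": 0.4}
BASELINE = -2.0  # baseline log-odds of reoffending

def risk_of_reoffending(case):
    """Logistic score in [0, 1] computed from case-history counts only."""
    z = BASELINE + sum(w * case.get(feature, 0) for feature, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def bail_recommendation(case, threshold=0.5):
    return "grant bail" if risk_of_reoffending(case) < threshold else "refer to judge"

print(bail_recommendation({"prior_convictions": 1}))                                 # grant bail
print(bail_recommendation({"prior_convictions": 4, "prior_failures_to_appear": 2}))  # refer to judge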
However, some observers warn that AIs may learn and mimic bias from their human inventors or the data they have been trained with.
While the NIST team recommends corrective methods at the data-source level, the legal community calls for a review mechanism for all AI-made decisions, probably at the level of an appeals judge.
Meanwhile, AI could help resolve minor cases that require little evidence quickly, without the intervention of a judge.
For example, Visual Analytics for Sense-making in Criminal Intelligence Analysis (VALCRI) is an application that helps investigators find related or relevant information across several criminal databases and presents it in a visual, readable format.
Read: Can AI replace a judge in the courtroom? (October 1, 2021)
Funded by the European Commission and coordinated by Professor William Wong at Middlesex University, VALCRI carries out the labor-intensive parts of crime analysis, surfacing connections humans might miss, to help decide whether a case requires further investigation.
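VALCRI’s internal workings are not described here, so the sketch below only illustrates the general idea behind such tools: automatically pulling together records from separate databases that share attributes an analyst would otherwise have to cross-check by hand. The field names and matching rule are assumptions for the example, not VALCRI’s actual method.

# Hypothetical sketch of cross-database linking of the kind such tools automate.
# Field names and the matching rule are illustrative assumptions.
incidents = [
    {"id": "INC-1", "location": "Elm St", "modus_operandi": "window entry"},
    {"id": "INC-2", "location": "Oak Ave", "modus_operandi": "door forced"},
]
suspects = [
    {"name": "J. Doe", "known_area": "Elm St", "known_mo": "window entry"},
    {"name": "A. Roe", "known_area": "High St", "known_mo": "door forced"},
]

def related_records(incident, candidates):
    """Return suspects who share a location or modus operandi with the incident."""
    return [s for s in candidates
            if s["known_area"] == incident["location"]
            or s["known_mo"] == incident["modus_operandi"]]

for incident in incidents:
    names = [s["name"] for s in related_records(incident, suspects)]
    print(f"{incident['id']}: possible links -> {names}")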
So, the day may not be far off when technology becomes the judge of good and bad human behavior and assigns appropriate punishments. Much will depend on how different governments and judiciaries choose to monitor these robots and their use.