Many AI initiatives to emulate the human brain, from driving a car to writing text, have failed in the ever-changing real world
By Kiran N. Kumar
Whether at Google, Facebook or Tesla, artificial intelligence remains at the top of the agenda, but not without frequent, telltale failures. Often, those failures have come to light when human intelligence outmaneuvered the algorithms.
Despite repeated attempts to modify the underlying databases to remove the scope for mistakes or bias, human intelligence has remained as unique as ever.
Numerous academic researchers are also concerned that AI research has drifted from its original goal of creating fully intelligent machines. Much of the current research involves statistical AI, which is typically used to solve narrow, repetitive problems.
Read: Smart homes on rise, will Artificial Intelligence take over next? (May 14, 2022)
Despite these pitfalls, capital investment in AI and machine learning projects has been buoyant, especially after the pandemic period’s overwhelming dependence on the Internet of Things.
Google, for example, multiplied the number of software projects that use AI within its work domains from “sporadic usage” in 2012 to more than 2,700 projects, backed by increasingly affordable neural networks, the rise of cloud computing infrastructure, and the availability of new research tools and datasets.
In a 2017 survey, one in five companies reported having “incorporated AI in some offerings or processes”, while the volume of research into AI increased by 50% between 2015 and 2019.
Google News remains a glaring example of how AI is not always dependable: Google’s frequent Panda updates to its ranking algorithms wiped out thousands of meaningful but small news outlets.
Facebook is still grappling with the spread of harmful content as its artificial intelligence systems have clearly underdelivered. Tesla’s Elon Musk is saddled with software that has so far failed to deliver a safe self-driving car.
To be precise, dependable AI is far from reality, and companies should consider focusing first on cultivating high-quality data and, of course, on adding substantial human resources to make it happen.
Read: Canada’s defense scientists develop a model to enable trust in Artificial Intelligence and Autonomy (January 28, 2021)
Many AI initiatives to emulate the human brain, whether driving a car, writing text or spotting tumors, have failed in a real world that keeps changing constantly and unpredictably.
For example, a Japanese scholar who had learned the South Indian language Telugu for her research in the 1990s was aghast when Telangana slang started popping up frequently in current Telugu news channels, movies and even everyday usage.
The new slang has clearly overtaken widely used traditional Telugu since the formation of Telangana state in 2014, and the AI-driven Google Translate could not detect the difference.
Zuckerberg told Congress in 2018 that AI tools would be “the scalable way” to identify harmful content such as nudity and terrorist-related material, but those tools have failed miserably to stop misinformation.
Besides the constant evolution of human language, propaganda techniques and campaigners’ new tricks stood in the way.
For example, anti-vaccine campaigners escaped detection by writing “va((ine” instead of vaccine, while private gun-sellers posted pictures of empty cases on Facebook Marketplace with the description “PM me.”
Facebook currently employs about 15,000 content moderators alongside its algorithms, yet a New York University Stern School of Business study recommended that it double that workforce. Cathy O’Neil, author of Weapons of Math Destruction, has said categorically that Facebook’s AI “doesn’t work.”
Tesla chief Musk, prone to overpromising, faced a similar challenge. He told Tesla investors in 2019 that by 2020 there would be one million Model 3 vehicles on the streets as driverless robotaxis.
Read: The world of Artificial Intelligence (September 6, 2020)
Even today, Tesla customers have to pay $10,000 for special software that merely enables these cars to park, change lanes and drive onto the highway by themselves without serious mishaps.
As noted earlier in our report on AI bias in health care, overhauling databases remains the major obstacle in this arena to achieving even 60% accuracy.
Despite these inaccuracies and failures, investments have been pouring into AI projects over the last two years. According to PitchBook Data, which tracks private capital markets, “artificial intelligence” has become a permanent fixture in corporate earnings calls.
But mere marketing will not help; what is needed is a data overhaul, experts suggest. MIT scientists say that even a lack of cultural focus on elaborate model-building could lead to such frequent failures.
Parmy Olson, a Bloomberg Opinion columnist and author of “We Are Anonymous”, has one suggestion: buckle up and work on laying the groundwork.
Read: Is Explainable AI a Distant Dream? (May 21, 2022)
“It’s fine for AI to occasionally mess up in low-stakes scenarios like movie recommendations or unlocking a smartphone with your face. But in areas like health care and social media content, it still needs more training and better data,” she writes.
“Rather than try and make AI work today, businesses need to lay the groundwork with data and people to make it work in the (hopefully) not-too-distant future,” Olson suggests.