AI is set to disrupt the jobs of "knowledge workers," including writers, accountants, architects and, ironically, even software engineers
Hinting that society isn't prepared for the rapid advancement of Artificial Intelligence (AI), search giant Google's Indian American CEO Sundar Pichai has warned that AI will impact "every product of every company."
Society needs to prepare for technologies like the ones it has already launched, he said in an interview with CBS' "60 Minutes" that aired Sunday, noting that laws to guardrail AI advancements are "not for a company to decide" alone.
Referring to the human-like capabilities of products like Google's chatbot Bard, Pichai said, "We need to adapt as a society for it."
Discussing AI programs' "emergent properties" – the ability to learn unanticipated skills they were not trained for – Pichai revealed how a Google program developed the ability to translate Bengali even though it was never "taught" the language.
Pichai confessed that no one at Google could fully figure out how it came about.
"There is an aspect of this which we call – all of us in the field call it a 'black box,'" Pichai said. "You know, you don't fully understand. And you can't quite tell why it said this, or why it got [it] wrong."
When asked why Google is going full-steam ahead with AI when it does not fully understand the technology, Pichai replied, "Let me put it this way. I don't think we fully understand how a human mind works either."
The jobs disrupted by AI would include those of "knowledge workers": writers, accountants, architects and, ironically, even software engineers, he said.
"This is going to impact every product across every company," Pichai said. "For example, you could be a radiologist, if you think about five to 10 years from now, you're going to have an AI collaborator with you. You come in the morning, let's say you have a hundred things to go through, it may say, 'these are the most serious cases you need to look at first.'"
"60 Minutes" was shown other advanced AI projects within Google, including DeepMind, where robots played soccer they had learned on their own rather than from humans. Another unit showed robots that recognized items on a countertop and fetched the interviewer an apple he asked for.
Warning of AI's consequences, Pichai said that the scale of the problem of disinformation and fake news and images will be "much bigger," adding that "it could cause harm."
Google launched its AI chatbot Bard as an experimental product to the public last month. It followed Microsoft's January announcement that its search engine Bing would include OpenAI's GPT technology after the launch of ChatGPT in 2022.
Google has published a document outlining "recommendations for regulating AI," but Pichai said society must quickly adapt, with regulation, laws to punish abuse, treaties among nations to make AI safe for the world, and rules that "align with human values, including morality."
"It's not for a company to decide," Pichai said. "This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers and so on."
When asked whether society is prepared for AI technology like Bard, Pichai answered, "On one hand, I feel no, because the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology is evolving, there seems to be a mismatch."
However, he added that he's optimistic because, compared with other technologies in the past, "the number of people who have started worrying about the implications" did so early on.
Pichai also acknowledged that Bard hallucinates frequently: the interviewer recounted asking Bard about inflation and instantly receiving suggestions for five books that, when he checked later, didn't actually exist.
When the anchor asked if nuclear arms-style global frameworks could be needed, Pichai said: "We would need that."
Admitting that concerns about artificial intelligence keep him awake at night and that the technology can be "very harmful" if deployed wrongly, he said, "I think we have to be very thoughtful."
"And I think these are all things society needs to figure out as we move along. It's not for a company to decide," he added.