
A New Fear Of AI Dawning?

By Ajit Balakrishnan
May 25, 2023 09:59 IST

...And it's not just fear of job losses, says Ajit Balakrishnan.

Illustration: Dominic Xavier/Rediff.com

The recent announcement that Geoffrey Hinton was quitting Google, and his reply when asked why ('I left so that I could talk about the dangers of AI without considering how this impacts Google'), has made us all wonder what lies ahead for us.

We all take Hinton's comment very seriously because he is known in scientific circles as the Godfather of AI. He is the brain behind the 'neural network', a mathematical technique that makes it possible to extract patterns from very large datasets of human language.

Thus, you can say that things like ChatGPT would not have come about without the 'neural network' and Hinton's contributions (more than 200 peer-reviewed research papers) towards making it usable.


Geoffrey Hinton's words and actions may have an impact similar to what would have happened if Homi Bhabha had quit the Atomic Energy Commission in the 1950s, saying, 'I left so that I could talk about the dangers of atomic energy without considering how this impacts India's nuclear efforts'. Had that happened, India might have cancelled its grand dreams of atomic energy.

Or, if E M S Namboodiripad had quit the CPI-M, saying, 'I left so that I could talk about the dangers of Marxism... etc'.

Normally, a breakthrough in technology is greeted by some folks who cry that jobs will be threatened by the new technology; the mystery this time is that a co-creator of the technology is the one raising the alarm. So, it may be important that we listen.

Technology left unattended or unsupervised can create havoc. Take, for instance, the 1984 gas tragedy at Union Carbide's (the maker of Eveready batteries) plant in Bhopal.

On December 3, 1984, about 45 tonnes of the dangerous gas methyl isocyanate escaped from the insecticide plant, killing 15,000-odd people and leaving half a million survivors to suffer from blindness and other illnesses caused by exposure to the toxic gas.

The consensus was (as reported by the Encyclopaedia Britannica) that 'substandard operating and safety procedures at the understaffed plant had led to the catastrophe'. In other words, it was the outcome of a technology used without adequate supervision.

One of the key worries Hinton has publicly expressed is that 'bad actors' may use AI technology to do 'bad things', things that may hurt innocent citizens. The example he gives is of authoritarian leaders using artificially generated speeches and writing to 'manipulate their electorates'.

While that may not be easy to picture, there are some easier-to-understand examples.

One is that of a driverless bus swerving into an adjacent lane with oncoming traffic; another is that of a military drone firing into an innocent crowd.

What can we do at the policy level, beyond merely worrying and joining the chorus of alarm?

In recent weeks, a number of researchers and think tanks, largely based in the Silicon Valley area, have begun appealing to people worldwide to sign an open letter calling for a six-month pause on all AI experiments more powerful than GPT-4 (the model created by the makers of ChatGPT).

One such letter, from the Future of Life Institute, has to date gathered about 28,000 signatures.

Prominent signatories include Elon Musk (owner of Twitter and founder of Tesla), Steve Wozniak (co-founder of Apple), and Yoshua Bengio (a pioneer of deep learning and a Turing Award winner).

Some of the recommendations from these folks are: Develop methods to spot AI-generated content, establish liability for AI-caused harm, and mandate third-party auditing and certification of AI systems (full report: the FLI's Policymaking in the Pause).

The perplexing situation we find ourselves in now with AI reminds me of the early days of the Internet (the late 1990s), when similar alarm bells began ringing, warning that Internet technology would lead to 'bad actors' posting hateful or erotic messages or stealing private information. There were cries that Internet innovation was running amok and was bound to destroy humanity.

We resolved these worries, which threatened to stop all innovation in the Internet/World Wide Web space, by defining through legislation the concept of an 'intermediary': a tech platform that merely provides a place where people can create content and commentators can post comments.

This allowed the intermediary/platform to continue innovating by freeing it from legal liability for mischievous behaviour by creators and commentators. That is to say, legislation was brought in that separates the responsibilities of the various types of players in this field.

While we think through all this, is it possible that the problem starts with researchers, investors and entrepreneurs in this field using the expression 'artificial intelligence' to hype up the work they are doing?

Should such tech be more appropriately named 'machine learning'?

By using the hyped-up expression 'artificial intelligence', they imply that the algorithm or gadget they are building has moral wisdom or the ability to do moral reasoning, thereby igniting all this panic.

So, shall we start by banning 'artificial intelligence' and mandating by law that all such work be labelled 'machine learning'?

Ajit Balakrishnan (ajitb@rediffmail.com), founder and CEO, Rediff.com, is an Internet entrepreneur.
