
Can ChatGPT Cause A Global Catastrophe?

February 03, 2023 14:54 IST

Suppose NLP is weaponised to influence policymakers to go the wrong way on climate change, or launch military attacks on neighbouring countries, warns Devangshu Datta.

Illustration: Uttam Ghosh/Rediff.com

ChatGPT, an AI created by OpenAI, has made huge waves. There are claims it could supersede conventional search engines.

Unlike a search engine, ChatGPT doesn't simply list links when a search term (or a natural language question) is entered.

Instead, it produces a verbal précis of what it judges to be the relevant information, drawn from the vast body of text it was trained on rather than from live links. The quality of that précis can vary a lot, of course.

One reason why this is popular is familiarity: The ChatGPT mode of supplying information is akin to that of a schoolteacher, albeit one who isn't very discriminating.

A second reason is simply that this provides a filter for search results, saving searchers from the tedium of manually reading links.

Natural language processing (NLP), as deployed here, is at the cutting edge of AI research. The uses of good NLP are endless, and it is among the more challenging areas of research.

Even the brightest and best humans speak and write with some lack of coherence.

The information in human speech and writing is scattered and unstructured, complicated by context, and often larded with humour, irony and sarcasm.

Being able to make sense of all that, to extract the relevant information and, above all, to speak and write in the same style as humans is very, very hard.

To do it in a narrow way like chatbots do is hard enough.

For example, a realtor or an automobile dealership may use a chatbot to discover the needs of clients. This is a narrow domain, since the topics are limited to, say, one bedroom or two; manual or automatic transmission; preferred budget range; and so on.
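A minimal sketch of such a narrow, slot-filling exchange might look like the following Python snippet. The slots, keyword rules and prompts are made up purely for illustration; a real system would use trained intent and entity models rather than keyword matching.

# Toy slot-filling chatbot for a car dealership (illustrative only).
SLOTS = {
    "transmission": ["manual", "automatic"],
    "budget": ["lakh", "thousand", "budget"],
    "body_type": ["hatchback", "sedan", "suv"],
}

PROMPTS = {
    "transmission": "Do you prefer manual or automatic?",
    "budget": "What budget range do you have in mind?",
    "body_type": "Hatchback, sedan or SUV?",
}

def extract_slots(utterance: str) -> dict:
    """Return whichever slots the customer's sentence appears to fill."""
    text = utterance.lower()
    found = {}
    for slot, keywords in SLOTS.items():
        for word in keywords:
            if word in text:
                found[slot] = word
                break
    return found

def next_question(filled: dict) -> str:
    """Ask about the first slot that is still empty."""
    for slot, prompt in PROMPTS.items():
        if slot not in filled:
            return prompt
    return "Great, let me pull up some matching cars."

filled = extract_slots("I'm looking for an automatic SUV")
print(filled)                 # {'transmission': 'automatic', 'body_type': 'suv'}
print(next_question(filled))  # What budget range do you have in mind?

The entire 'intelligence' here is a keyword list, which is exactly why such bots work only inside their narrow domain.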

Doing NLP across a broad spectrum of subjects, and doing it well enough to make humans believe they are interacting with other humans, is essentially the famous Turing Test.

Some of the earliest NLP programs, most famously the 1960s chatbot ELIZA, simulated psychotherapists. Chatbot 'therapy' of that kind is now considered of dubious use in treating mental health issues.

But as it gets better, NLP opens up endless other possibilities.

A lot of autonomous vehicle research, for instance, is focussed on using NLP to improve voice commands and responses in critical situations, such as a passenger giving instructions to a car.

Another benign use may be helping to diagnose physical ailments, by asking questions and parsing natural language responses to work out symptoms the way human physicians do.

NLP could give overworked health workers a big boost as a first filter.
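As a rough sketch of that first-filter idea, an off-the-shelf zero-shot classifier could map a patient's free-text complaint onto a list of candidate symptoms for a clinician to review. The model choice, the sample complaint and the symptom list below are assumptions made purely for illustration, not a clinical tool.

# Illustrative sketch: map free-text complaints to candidate symptoms.
# A first-pass filter only; a clinician reviews the output.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

complaint = ("I've had a pounding headache since yesterday, "
             "and bright light makes it much worse.")
candidate_symptoms = ["headache", "light sensitivity", "fever",
                      "nausea", "chest pain"]

result = classifier(complaint, candidate_labels=candidate_symptoms,
                    multi_label=True)

# Report the symptoms the model considers likely, highest score first.
for label, score in zip(result["labels"], result["scores"]):
    if score > 0.5:
        print(f"{label}: {score:.2f}")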

Other 'secretarial' use cases, such as writing up the minutes of corporate meetings or churning out comprehensible technical manuals, are easy to think up.

Using NLP to run through millions of social media and mainstream media statements to understand public attitudes is a more nuanced use case.

For example, let's say there's a proposed change to tax laws, or a proposal to lift prohibition in a given state. Policymakers could gauge the mood by using NLP to analyse high volumes of commentary.
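A bare-bones sketch of such mood-gauging, using a stock sentiment-analysis pipeline, might look like the snippet below. The default model and the sample comments are assumptions for illustration; a real deployment would need a model tuned to the local languages and the policy domain.

# Illustrative sketch: gauge public mood on a policy proposal from comments.
from collections import Counter
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model

comments = [
    "Lifting prohibition will finally bring the trade out of the shadows.",
    "This proposal ignores the families it will hurt.",
    "About time the tax rules were simplified.",
]

# Count how many comments the model labels positive versus negative.
tally = Counter(result["label"] for result in sentiment(comments))
total = sum(tally.values())
for label, count in tally.items():
    print(f"{label}: {count}/{total}")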

Unfortunately, less benign uses could also arise if that ability to 'understand' social media interaction is extended to manipulating opinion.

NLP could be a good tool for influencing social media. It could also be a phishing and social engineering tool.

Another issue is bias.

Language models, like all AI systems, have to be trained. There are multiple ways of training, but all of the methods involve masses of data.

In NLP, that data is generally drawn from the Internet and other publicly available sources of text.

Unfortunately, the Internet (like the real world) contains much wrong information, as well as fake news and biases.

If NLP is trained using racist content, for example, it will amplify opinions that people of a certain colour, or religion, are superior to others.

If it is trained on content asserting women are bad at a certain task, it will also assert this is true. Similar problems arise with subjects like climate change and vaccine efficacy.
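The effect is easy to demonstrate with a masked-language-model probe. The sketch below (the model and prompts are illustrative assumptions) simply asks a pretrained model which words it considers most likely to fill a blank; its guesses mirror whatever associations dominated its training text.

# Illustrative probe: a pretrained model's guesses reflect its training data.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for prompt in [
    "The nurse said [MASK] would be back in a minute.",
    "The engineer said [MASK] would be back in a minute.",
]:
    top = fill(prompt, top_k=3)
    words = [candidate["token_str"] for candidate in top]
    print(prompt, "->", words)

# If the training text habitually pairs certain roles with certain pronouns,
# the model's top guesses will pair them the same way.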

Suppose NLP is trained on fake news, or weaponised to influence policymakers to go the wrong way on climate change, or to launch military attacks on neighbouring countries.

In a recent survey, a third of AI researchers polled (total sample of 327) believed AI could potentially cause a global catastrophe. It's not hard to see this happening.

Feature Presentation: Ashish Narsale/Rediff.com

Devangshu Datta