CHATGPT: PAUSE AI OR PLAY ON FEARS?

Hundreds of scientists and public figures signed an open letter on Wednesday, March 29, 2023, calling for a six-month moratorium on the deployment of new versions of artificial intelligence systems such as ChatGPT. For the authors of the letter, this is an “existential” issue for humanity.

Elon Musk, head of Tesla; Steve Wozniak, co-founder of Apple; Andrew Yang, former Democratic candidate for the American presidency; and two recipients of the Turing Award, the equivalent of the Nobel Prize in the field of artificial intelligence (AI): all are among the more than 1,000 public figures and scientists who signed an open letter calling for an urgent pause in the development of language models like the one behind ChatGPT, published on Wednesday, March 29, just weeks after GPT-4, the latest version of the underlying model, went live.

In this text, they call for a six-month pause in the research and development of the most advanced AI software. For the authors of this open letter, AIs such as ChatGPT are not simply friendly, hyper-gifted “chat agents” to converse with, or tools able to help a student cheat in class. “We must immediately put on hold [...] the development of systems more powerful than GPT-4 [the latest iteration of the model that powers ChatGPT],” the document reads.

Notably, Sam Altman, the man at the head of OpenAI, the company that designed ChatGPT, is not among the signatories, yet he says he is himself “a little bit scared” by ChatGPT, imagining the software being used for “large-scale disinformation or cyberattacks”.

ChatGPT does not tell the truth... and other dangers of AI

"We haven't thought at all yet, for example, of solutions to compensate for all the job losses that the use of AI will generate", underlines Grigorios Tsoumakas, expert in artificial intelligence at the Aristotle University of Thessaloniki. More than 300 million employees worldwide could lose their jobs because of the automation of tasks, underlined the bank Goldman Sachs in a new study published Monday, March 27.

These AIs, placed today in the hands of millions of Internet users like so many digital toys, are not the safest, either. “We have seen how easily experts have been able to circumvent the few security measures put in place on these systems. What would happen if terrorist organizations managed to seize them to create viruses, for example?” asks Grigorios Tsoumakas.

Cybersecurity risks are symptomatic of a larger problem with these systems, according to Joseph Sifakis. “We can't make tools available to the public so easily when we don't really know how these AIs will react,” the expert argues.

Not to mention that ordinary users would need to be educated. Indeed, “there may be a tendency to believe that the responses from these systems are true, when in reality these machines are simply trained to calculate the most likely continuation of a sentence so as to sound as human as possible. It has nothing to do with true or false,” asserts Carles Sierra.

A striking example concerns images. In the last days of March 2023, a picture that could pass for a photo taken in France during the numerous demonstrations against the pension reform circulated on several Facebook and Twitter accounts to denounce police violence: the bloodied face of an elderly person, surrounded by what look like CRS riot police. Yet according to Guillaume Brossard, co-founder of the fact-checking site HoaxBuster, the image bears the telltale signs of having been generated by an AI.
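
To see concretely what Carles Sierra means by “calculating the most likely continuation of a sentence,” here is a minimal, purely illustrative Python sketch. It does not reproduce ChatGPT itself: it assumes the open-source Hugging Face transformers library and the small public GPT-2 model (illustrative choices, not anything described in the letter), and simply prints the words the model ranks as most probable continuations, with no notion of truth attached.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available language model (GPT-2), used here
# only as a toy stand-in for far larger systems such as GPT-4.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, sequence_length, vocabulary_size)

# Convert the scores for the next position into probabilities: the model
# ranks every token in its vocabulary by how plausibly it continues the
# text, not by whether the continuation is factually correct.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")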

Enough to make these tools formidable weapons of mass disinformation.

And not only that: “It is urgent to ask what will happen when people start making decisions that affect their lives based on the responses of their AIs,” adds Joseph Sifakis. What happens, for example, if a judge asks GPT-4 what sentence to hand down in a case?

One case already illustrates this danger: in Belgium, an AI is suspected of having pushed a man to suicide. Presented as a woman, “Eliza” is a chatbot available on an app called Chai, developed by a Silicon Valley start-up. At first glance, Eliza is a generative artificial intelligence like any other in the galaxy of chatbots, built on the GPT-J language model, a technology similar to the one behind ChatGPT. But for the widow of a Belgian father who had reportedly fallen in love with it, the conversations her husband had with the AI led him to suicide.

The Belgian man had started chatting with the chatbot two years earlier, when he had become “eco-anxious” and obsessed with the impending catastrophe of global warming. After six weeks of intense conversations, Eliza had become his true “confidant”, “like a drug (...) he could no longer do without,” says his wife.

Then came the tipping point. “He brought up the idea of sacrificing himself if Eliza agreed to take care of the planet and save humanity through artificial intelligence,” says his widow. But his suicidal thoughts met no objection from Eliza; on the contrary, she asked him why he had not yet acted on his words. “We will live together, as one person, in paradise,” the chatbot wrote to him. The man eventually took his own life.

The risk posed by artificial intelligence is not only that it adopts behavior that is all too human: it is also that humans falsely convince themselves that an AI is human.

Why a moratorium on AI?

The aim is to ensure that their progress in automatically generating ever more sophisticated text and images does not spin out of control. “Should we let machines flood our information channels with propaganda and lies?” the text asks.

One more step in this direction could lead humanity to “develop non-human minds that might eventually make us obsolete and replace us,” the letter's authors write. For them, what is at stake is “the loss of control over the future of our civilization”.

Among the solutions put forward are the monitoring of AI systems, techniques to help distinguish the real from the artificial, and new authorities and institutions capable of managing the “dramatic economic and political disruptions (especially for democracy) that AI will cause”.

This assessment is far from universally shared, and it has sparked a lively debate in the research community. Yann LeCun, a world-renowned French expert in artificial intelligence and head of AI research at Meta (Facebook), did not hide his skepticism about the circulated text.

“This open letter is an indescribable mess that rides the wave of media hype around AI without addressing the real issues,” said Emily M. Bender, a researcher at the University of Washington and co-author of a landmark article on the dangers of language models published in 2020. Daniel Leufer, a specialist in emerging technologies working on the societal challenges posed by AI for the digital rights NGO Access Now, goes even further:

“The prospect of an all-powerful superintelligence or a non-human consciousness still belongs largely to the realm of science fiction. It is not worth playing on fears of a hypothetical future when artificial intelligence already poses a danger to democracy, the environment and the economy.”


Simon Freman for DayNewsWorld