Author: Plackhin A.

Elon Musk and experts call for a halt to the development of AI models

They fear that AI models more powerful than GPT-4 could have "potentially disastrous consequences for society."

Elon Musk and a group of artificial intelligence experts are calling on laboratories developing artificial intelligence models to suspend the training of systems more powerful than GPT-4. The tycoon and 1,100 others have signed an open letter warning that these models could have "potentially disastrous consequences for society" and that they should only be created "after we are confident that their effects will be positive and their risks manageable."

The letter, signed by OpenAI co-founder Elon Musk along with industry figures such as Apple co-founder Steve Wozniak and Stability AI CEO Emad Mostaque, among others, argues that artificial intelligence models are becoming competitive with humans. The signatories question whether such systems should really replace humans in a variety of tasks, including "controlling our civilization."

"Should we allow machines to flood our information channels with propaganda and lies? Should we automate all tasks, including satisfying ones? Should we develop non-human minds that can outnumber us, be smarter, obsolete, and replace us? Should we risk losing control of our civilization? Such decisions should not be delegated to unelected technological leaders," the letter fragment reads.

For these reasons, Musk and the other signatories ask the labs to halt for at least six months any training of AI systems more powerful than GPT-4. "This pause must be public and verifiable, and include all key players. If such a pause cannot be imposed quickly, governments should step in and impose a moratorium," they state.

A six-month pause to create new safety protocols for AI

The pause would be used, among other things, to develop a set of shared safety protocols that would be overseen by "independent outside experts." "These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt." Musk and the other signatories stress, however, that the move does not mean a temporary halt to AI development as a whole, but rather a "stepping back" from the race to develop ever-larger models.

The letter also argues that AI developers should work with policymakers to regulate AI through a range of measures. Among them is a requirement that images and any other AI-generated content be watermarked, so that users can distinguish a real photo from something synthetic. It also calls for the creation of "well-resourced institutions" to cope with the radical economic and political upheaval (especially for democracy) that AI will cause.

Notably, Sam Altman, OpenAI's chief executive, did not sign the letter, nor did anyone else from the company that developed GPT-4. Microsoft employees, however, do appear among the signatories.
