Elon Musk and other experts call for a temporary pause in AI development, citing risks to society

Tesla CEO Elon Musk, artificial intelligence (AI) experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's recently launched GPT-4.

The call was made in an open letter warning of potential risks to society.

Earlier this month, Microsoft-backed OpenAI launched the fourth generation of its GPT (Generative Pre-trained Transformer) AI program that has wowed users by engaging them in human-like conversations, composing songs and summarizing long documents.

According to the letter, issued by the Future of Life Institute, powerful AI systems should be developed only once their developers are confident that the effects will be positive and the risks manageable.

The Future of Life Institute is a non-profit organization funded primarily by the Musk Foundation, Founder Pledge and the Silicon Valley Community Foundation.

Earlier this month, Musk admitted that AI development was causing him stress.

He has urged regulators to ensure that the development of AI serves the public interest.

MORE NUMEROUS, SMARTER, A REPLACEMENT FOR HUMANS

OpenAI did not immediately respond to a request for comment on the open letter.

The letter also calls for a pause on AI development until safety protocols are developed and vetted by independent experts, and urges AI leaders to work with policymakers on governance.

“Should we allow machines to flood our information channels with propaganda and falsehoods? Should we develop non-human minds that may eventually outnumber, outsmart and replace us?” the letter asked.

The letter was signed by over 1,000 individuals including Musk.

However, OpenAI CEO Sam Altman did not sign the letter, nor did Alphabet CEO Sundar Pichai or Microsoft CEO Satya Nadella.

Those who did sign include Stability AI Chief Executive Officer Emad Mostaque, researchers at Alphabet-owned DeepMind, and veteran AI researchers Yoshua Bengio and Stuart Russell.

The concern comes as ChatGPT draws the attention of US lawmakers, who have raised questions about its impact on national security and education.

The European Union’s police force (Europol) warned of possible abuse of the system in phishing attempts, disinformation and cybercrime.

Meanwhile, the UK government unveiled plans for an “adaptable” regulatory framework around AI.

AI RACE

A New York University professor who signed the letter said that while it is not perfect, its spirit is right: AI development needs to slow down until its effects are properly understood.

“Big industry players are becoming more and more secretive about what they are doing and making it difficult for the public to defend themselves against any possible harm,” he explained.

Since its release last year, OpenAI's ChatGPT has pushed competitors to develop similar models, as companies such as Alphabet race to build more AI capability into their products.

Investors, wary of relying on a single company, are encouraging competitors to OpenAI.

Meanwhile, Suresh Venkatasubramanian, a professor at Brown University and former Assistant Director at the White House Office of Science and Technology Policy, said that the power to develop such systems has always rested with the few companies that have the resources to do so.

“The same goes for these models: they are difficult to build and difficult to democratize,” he said.
