
Elon Musk and dozens of other technology leaders have called on AI labs to pause the development of systems that can compete with human-level intelligence.
In an open letter from the Future of Life Institute, signed by Musk, Apple co-founder Steve Wozniak and 2020 presidential candidate Andrew Yang, AI labs were urged to pause the training of models more powerful than GPT-4, the latest version of the large language model software developed by US startup OpenAI.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: should we let machines flood our information channels with propaganda and untruth?” the letter reads.
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, and replace us? Should we risk loss of control of our civilization?”
“Such decisions must not be delegated to unelected tech leaders,” the letter said.
The Future of Life Institute is a non-profit organization based in Cambridge, Massachusetts, that campaigns for the responsible and ethical development of artificial intelligence. Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn.
The organization has previously gotten the likes of Musk and Google-owned AI lab DeepMind to promise never to develop lethal autonomous weapon systems.
The institute is calling on all AI labs to “immediately pause the training of AI systems more powerful than GPT-4 for at least six months.”
GPT-4, which was released earlier this month, is said to be much more advanced than its predecessor GPT-3.
“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter added.
ChatGPT, the viral AI chatbot, has dazzled researchers with its ability to produce human-like responses to user prompts. By January, just two months after its launch, ChatGPT had amassed 100 million monthly active users, making it the fastest-growing consumer application in history.
The technology, trained on vast amounts of data from the internet, has been used for everything from writing poetry in the style of William Shakespeare to drafting legal opinions on court cases.
But AI ethicists have also raised concerns about potential misuse of the technology, such as plagiarism and misinformation.
In the letter, the technology leaders and academics said AI systems with human-competitive intelligence pose “profound risks to society and humanity.”
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and loyal,” it said.
OpenAI was not immediately available for comment when contacted by CNBC.
OpenAI, which is backed by Microsoft, has reportedly secured a $10 billion investment from the Redmond, Washington technology giant. Microsoft has integrated the company’s GPT natural language processing technology into its Bing search engine to make it more interactive.
Google later announced its own competing conversational AI product for consumers, called Google Bard.
Musk has previously said that he thinks AI represents one of the “greatest risks” to civilization.
The Tesla and SpaceX CEO co-founded OpenAI in 2015 with Sam Altman and others, though he left OpenAI’s board in 2018 and no longer holds a stake in the company.
He has criticized the organization several times recently, saying that he believes it is deviating from its original purpose.
As the technology advances at a rapid pace, regulators are racing to get a handle on AI tools. On Wednesday, the UK government published a white paper on AI, tasking various regulators with overseeing the use of AI tools in their respective sectors by enforcing existing laws.