Elon Musk and Tech Leaders Urge AI Pause over Societal Dangers (2023)

In an open letter, Elon Musk and a group of artificial intelligence experts and industry leaders urged a pause in the development of systems more powerful than OpenAI's newly launched GPT-4, citing societal dangers.


Earlier this month, Microsoft-backed OpenAI announced the fourth edition of its AI program GPT (Generative Pre-trained Transformer). The program wowed users by engaging them in human-like conversations, creating songs, and summarizing long documents. 

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” said the letter issued by the Future of Life Institute.

According to the European Union Transparency Register, the Musk Foundation is a major donor to the institute, as are the London-based group Founders Pledge and the Silicon Valley Community Foundation.

“AI stresses me out,” Musk said earlier this month. He is a co-founder of industry leader OpenAI, and his automaker Tesla (TSLA.O) uses AI in its Autopilot driver-assistance system.

Musk, who has expressed frustration with regulators critical of Tesla's Autopilot, has nonetheless called for regulation to ensure that AI development serves the public good.


James Grimmelmann, professor of digital and information law at Cornell University, said: “Considering how hard Tesla has fought accountability for flawed AI in its self-driving cars, it is deeply hypocritical for Elon Musk to sign. A pause is a good idea, but the letter is vague and doesn't take the regulatory issues seriously.”

Tesla last month had to recall more than 362,000 U.S. vehicles to update its software after U.S. regulators said its driver-assistance system could cause accidents; Musk dismissed the use of the word “recall” as “anachronistic and just flat wrong.”


OpenAI did not immediately respond to a request for comment on the open letter, which urged a pause on advanced AI development until shared safety protocols could be developed by independent experts, and which called on developers to work with policymakers on governance.

“Should we let machines flood our information channels with propaganda and untruth? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter asked.


The letter was signed by more than 1,000 people, including Musk. OpenAI CEO Sam Altman was not among the signatories, nor were Alphabet CEO Sundar Pichai and Microsoft CEO Satya Nadella.

Co-signers included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, Yoshua Bengio, an AI heavyweight often referred to as one of the “godfathers of AI,” and research pioneer Stuart Russell.

The concerns come as ChatGPT draws attention from U.S. lawmakers, who question its implications for national security and education. The EU police force Europol warned on Monday that the system could be exploited for phishing, disinformation and cybercrime. Meanwhile, the UK government has proposed an “adaptable” regulatory framework for AI.


AI race

“The letter is not perfect, but the spirit is correct. We need to slow down until we better understand what that means,” said New York University professor Gary Marcus, who signed the letter.

“Big companies are becoming increasingly secretive about what they are doing, which makes it harder for society to defend against whatever harms may materialize.”

Since its release last year, OpenAI's ChatGPT has pushed competitors to accelerate the development of similarly large language models, with companies such as Alphabet Inc (GOOGL.O) racing to fold AI into their products. Investors wary of relying on a single company are also backing OpenAI's competitors.

Microsoft declined to comment on the letter, and Alphabet did not respond to calls or emails seeking comment.

“A lot of the power to develop these systems has been concentrated in the hands of a few companies that have the resources to do it,” said Suresh Venkatasubramanian, a professor at Brown University and former assistant director in the White House Office of Science and Technology Policy.

“That's just how these models are: they're hard to build, and they're hard to democratize.”

