Some prominent figures in the world of technology, such as Elon Musk, signed an open letter calling for a pause in “giant artificial intelligence experiments” developed by companies such as Amazon, Facebook, and Google.
Many people expressed surprise that Elon Musk joined the initiative, since the new owner of Twitter had shared tweets only a few weeks earlier saying the platform would use AI to monitor and detect manipulation of public opinion.
In the months ahead, we will use AI to detect & highlight manipulation of public opinion on this platform.
Let’s see what the psy ops cat drags in …
— Elon Musk (@elonmusk) March 18, 2023
In the letter, the experts warn that the hasty advancement of AI is leading to “a tipping point in the history of civilization,” which would necessitate urgent measures to avoid potentially catastrophic consequences. The letter was published by the Future of Life Institute, a non-profit organization backed by Musk.
Who else signed the letter?
Elon Musk was not the only one to express concern about the risks of AI; other tech figures also signed the letter:
- Emad Mostaque, CEO of Stability AI.
- Researchers at DeepMind, which is owned by Alphabet.
- Yoshua Bengio, commonly known as one of the “godfathers of AI”.
- Stuart Russell, a pioneer in AI research.
OpenAI CEO Sam Altman did not sign the letter, nor did Alphabet’s Sundar Pichai or Microsoft’s Satya Nadella, according to information gathered by CNN.
As far back as 2014, Musk told MIT students that he believed people had to be very careful with AI, calling it the “greatest existential threat.”
Musk expressed his fear again in 2018 when, in an interview with Kara Swisher, he commented, “I think we need to be very careful about AI moving forward.”
The letter suggests that AI experiments could have unintended and dangerous consequences that “no one can reliably understand, predict or control.”
These risks include the creation of autonomous weapons and the manipulation of public opinion; left unchecked, the letter warns, AI could endanger humanity.
AI could contribute to disinformation
Some experts who signed the letter also commented on the risks of propaganda, fake news, or lies spreading through AI-generated articles or images.
AI-generated content can appear so realistic that many people would not be able to distinguish it from the real thing.
The letter also urges governments, businesses, and society at large to take precautions against the risks of AI.
Its suggestions include creating a body to regulate the development and oversight of AI, promoting greater transparency and accountability among companies working with AI, and educating the public about the potential risks and benefits of artificial intelligence.
“This does not mean a pause in the development of AI in general, but a step back in the dangerous race toward ever-larger, unpredictable black box models with emerging capabilities,” the letter says.