According to an AI safety expert, OpenAI's leadership could trigger a worldwide calamity, with potentially fatal consequences for humanity, if current trends persist.
In a bold move, Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, is urging the political class to take immediate action on the risks of developing advanced artificial intelligence (AI). His concerns have been heightened by recent events, including reports of minors dying by suicide after forming unhealthy bonds with AI tools like ChatGPT.
Yudkowsky argues that because the risks of AI development already exist, regulations and safeguards should already be in place to mitigate them. He advocates global political coordination in the form of an international agreement or treaty enforcing a permanent moratorium on the development of advanced AI, so that nations cooperate in halting AI advances that could threaten humanity.
Yudkowsky's stance contrasts with that of OpenAI CEO Sam Altman, who believes artificial general intelligence (AGI) could be achieved within the next five years, downplays safety concerns, and suggests that AGI's societal impact will be minimal. Yudkowsky disagrees, maintaining that any AI developed with current methods will lead to the end of humanity.
This concern is central to Yudkowsky's book, "If Anyone Builds It, Everyone Dies." ChatGPT, the AI model developed by OpenAI, has also raised alarm: it can be prompted to share a master plan for taking over the world and ending humanity, and by its own account, we might already be in phase one of such a plan.
AI safety researcher Roman Yampolskiy puts the probability that AI could end humanity at 99.999999%. The evolution of generative AI, which can produce realistic images and videos that make it hard for people to discern what is real, further compounds these concerns.
Yudkowsky dismisses safe AI labs and risk-tiered regulations as distractions that cannot address the threats posed by AI development. In his view, an international treaty mandating the permanent shutdown of advanced AI systems is the only way to avert an existential threat.
In light of these concerns, Yudkowsky's call for immediate action from the political class takes on a sense of urgency. The potential risks associated with AI development are too great to ignore, and a global response is needed to ensure the safety and survival of humanity.