Yesterday, I posted on CAN YOU CHAT WITH AN AI GOD? That seems somewhat frivolous when the world of Artificial Intelligence faces more serious existential threats.
- Ongoing is a race for Artificial General Intelligence (AGI) that is beginning to impact the geopolitical world much as First Strike and the resultant Nuclear Winter did during the late Cold War of the mid-1980s.
- It is debatable whether AGI has attained an operable state, but countries and companies are taking it quite seriously as a prelude to ASI, or Artificial Super Intelligence, when machines surpass human capabilities on cognitive tasks and outperform us across every domain by a wide margin.
- When? Estimates for the arrival of AGI range from very soon to the late 2020s to mid-century to never.
- The uncomfortable fear is that AGI represents an existential risk.
- AI has already surpassed humans on a variety of language-understanding and visual benchmarks.
- From Wikipedia: Creating AGI is a primary goal of AI research and of companies such as OpenAI,[6] Google,[7] xAI,[8] and Meta.[9] A 2020 survey identified 72 active AGI research and development projects across 37 countries.[10]
Time magazine recently ran an AI Special Report, identifying the key players and citing new risks to an already unstable world.
- One example is a simulated exercise conducted by Intelligence Rising about AI's impact on geopolitics.
- Under a crystal chandelier in a high-ceilinged anteroom in Paris, the moderator of Intelligence Rising is reprimanding his players. These 12 former government officials, academics, and artificial intelligence researchers are here to participate in a simulated exercise about AI’s impact on geopolitics. But just an hour into the simulation, things have already begun to go south.
- The team representing the U.S. has decided to stymie Chinese AI development by blocking all chip exports to China. This has raised the odds, the moderator says, of a Chinese invasion of Taiwan: the U.S. ally that is home to the world’s most advanced chip-manufacturing plants. It is 2026, and the simulated world is on the brink of a potentially devastating showdown between two nuclear superpowers.
- Why? Because each team is racing to create what’s known as artificial general intelligence, or AGI: an AI system so good, it can perform almost any task better, cheaper, and faster than a person can. Both teams believe getting to AGI first will deliver them unimaginable power and riches. Neither dares contemplate what horrors their rival might visit upon them with that kind of strength.
- But in Paris, the prognosis is not looking good. Players—each skeptical of the others’ intentions—continue to race to be the first to create AGI, prioritizing investments in boosting AI’s capabilities rather than the slow and expensive task of safety research. Ultimately, some time in 2027, one team decides to deploy a powerful model even though they are not sure it is safe. The model kicks off a cycle of recursive self-improvement, discovers cybervulnerabilities that allow it to escape human control, and eventually wipes out the human race using novel nanotechnology.
- Although it’s not a happy ending, Intelligence Rising’s moderators have achieved their goal. They did not come to Paris to perfectly model the future. Instead, their objective was to communicate urgency. “I hope the players leave with a more visceral sense of how fast things can go catastrophically wrong,” says Ross Gruetzemacher, the game’s moderator, who is also a professor at Wichita State University. “And how little room for error we have to get things right.”
- Sam Altman, the CEO of ChatGPT maker OpenAI, expects the first AGI to be created during President Trump’s second term in office. OpenAI, Google, Microsoft, Amazon, and Meta are together funneling hundreds of billions of dollars—the equivalent cost in today’s dollars of a dozen Manhattan Projects per year—into the construction of huge data centers where they believe AGI will be summoned into existence.
- In the headlong rush for technological supremacy, strange new risks are being created. Just as nuclear scientists were unsure whether the first atomic blast would ignite the earth’s atmosphere, today’s AI researchers can’t say whether smarter-than-human computers would be friends or foes. There’s a chance, some believe, that superhuman intelligence might escape human control entirely. If a runaway AGI wanted to harvest our oxygen, electricity, and carbon for its own purposes, there might be nothing we could do to stop it. In this way, some scientists fear, the winner of the race to AGI might be neither the U.S. nor China, but rogue AI itself, spelling the end of human civilization.
Elon Musk, known for his ambitious ventures in technology and space exploration, has recently made a striking announcement regarding Tesla's AI robot project, often referred to as Optimus. This endeavor was anticipated to revolutionize human-machine interactions. However, following an alarming revelation, the excitement around this technological marvel quickly gave way to serious concern.
I asked Google AI Overview who the concerned AI leaders are, and which ones want to prioritize rapid innovation:
Leaders promoting caution and safety
Some of the most prominent voices in AI promoting careful development have emphasized the technology's potential for profound, unintended risks.
- Geoffrey Hinton: Known as one of the "godfathers of AI," Hinton left his role at Google to speak freely about the dangers of AI. He has warned that intelligent machines could ultimately pose an existential threat to humanity and that future AI versions could exhibit unexpected and harmful behaviors.
- Elon Musk: The CEO of Tesla and X, who was also a co-founder of OpenAI, has been a long-time vocal advocate for AI regulation. Along with over 1,000 tech leaders, he signed an open letter in 2023 calling for a pause on large-scale AI experiments to allow for the development of robust safety protocols.
- Sam Altman: The CEO of OpenAI has publicly acknowledged the risks posed by powerful AI systems, stating that people should be "a little bit scared" and that regulations are necessary to help avoid negative consequences.
- Bill Gates: The Microsoft co-founder has urged that AI be tested carefully and properly regulated. He co-signed a statement from the Center for AI Safety warning of potential extinction risks if the technology is not properly controlled.
Leaders prioritizing rapid innovation
Other prominent leaders and industry figures stress the importance of rapid development to stay competitive and harness AI's benefits, sometimes downplaying or expressing less alarm about the catastrophic risks emphasized by others.
- Yann LeCun: Meta's Chief AI Scientist has often been a counterpoint to some of the more extreme warnings about AI safety. He has publicly criticized the "doomer" mentality of existential AI risk and advocated for a more open approach to AI development.
- Mark Zuckerberg: The CEO of Meta has emphasized the need for rapid innovation and has been less focused on the long-term existential threats of AI compared to competitors like OpenAI. Meta's open-source approach to AI development is a reflection of this prioritization of speed and widespread access over slower, more cautious, and centralized deployment.
- Marc Andreessen: A prominent venture capitalist and co-founder of Netscape, Andreessen has been a very vocal critic of the AI safety movement. He published an essay arguing that AI will "save the world" and that the fears about AI are based on unfounded, "panic-driven" anxieties.