
WHAT DO YOU KNOW ABOUT ARTIFICIAL GENERAL INTELLIGENCE?

Yesterday, I posted on CAN YOU CHAT WITH AN AI GOD?  That topic seems somewhat frivolous in the world of Artificial Intelligence when there are more serious existential threats.

  • It is debatable whether AGI has reached an operable state, but countries and companies are taking it quite seriously as a prelude to ASI, or Artificial Super Intelligence: the point at which machines surpass human capabilities on cognitive tasks and outperform us across every domain by a wide margin.
  • When?  Estimates for the arrival of AGI range from very soon, to the late 2020s, to mid-century, to never.
  • The uncomfortable fear is that AGI represents an existential risk.
  • AI has already surpassed humans on a variety of language, understanding, and visual benchmarks.  Time magazine recently ran an AI Special Report identifying the key players and citing new risks to an already unstable world.
  • One example: a simulated exercise conducted by Intelligence Rising about AI's impact on geopolitics.
    • Under a crystal chandelier in a high-ceilinged anteroom in Paris, the moderator of Intelligence Rising is reprimanding his players. These 12 former government officials, academics, and artificial intelligence researchers are here to participate in a simulated exercise about AI’s impact on geopolitics. But just an hour into the simulation, things have already begun to go south.
    • The team representing the U.S. has decided to stymie Chinese AI development by blocking all chip exports to China. This has raised the odds, the moderator says, of a Chinese invasion of Taiwan: the U.S. ally that is home to the world’s most advanced chip-manufacturing plants. It is 2026, and the simulated world is on the brink of a potentially devastating showdown between two nuclear superpowers.
    • Why? Because each team is racing to create what’s known as artificial general intelligence, or AGI: an AI system so good, it can perform almost any task better, cheaper, and faster than a person can. Both teams believe getting to AGI first will deliver them unimaginable power and riches. Neither dares contemplate what horrors their rival might visit upon them with that kind of strength.
    • But in Paris, the prognosis is not looking good. Players—each skeptical of the others’ intentions—continue to race to be the first to create AGI, prioritizing investments in boosting AI’s capabilities rather than the slow and expensive task of safety research. Ultimately, some time in 2027, one team decides to deploy a powerful model even though they are not sure it is safe. The model kicks off a cycle of recursive self-improvement, discovers cybervulnerabilities that allow it to escape human control, and eventually wipes out the human race using novel nanotechnology.
    • Although it’s not a happy ending, Intelligence Rising’s moderators have achieved their goal. They did not come to Paris to perfectly model the future. Instead, their objective was to communicate urgency. “I hope the players leave with a more visceral sense of how fast things can go catastrophically wrong,” says Ross Gruetzemacher, the game’s moderator, who is also a professor at Wichita State University. “And how little room for error we have to get things right.”
  • Sam Altman, the CEO of ChatGPT maker OpenAI, expects the first AGI to be created during President Trump’s second term in office. OpenAI, Google, Microsoft, Amazon, and Meta are together funneling hundreds of billions of dollars—the equivalent cost in today’s dollars of a dozen Manhattan Projects per year—into the construction of huge data centers where they believe AGI will be summoned into existence.
  • In the headlong rush for technological supremacy, strange new risks are being created. Just as nuclear scientists were unsure whether the first atomic blast would ignite the earth’s atmosphere, today’s AI researchers can’t say whether smarter-than-human computers would be friends or foes. There’s a chance, some believe, that superhuman intelligence might escape human control entirely. If a runaway AGI wanted to harvest our oxygen, electricity, and carbon for its own purposes, there might be nothing we could do to stop it. In this way, some scientists fear, the winner of the race to AGI might be neither the U.S. nor China, but rogue AI itself, spelling the end of human civilization.
Elon Musk, known for his ambitious ventures in technology and space exploration, recently made a striking announcement regarding Tesla's AI robot project, Optimus, an endeavor anticipated to revolutionize human-machine interaction. Following an alarming revelation, however, the excitement around this technological marvel quickly gave way to serious concern.
I asked Google AI Overview who the concerned AI leaders are, and which ones want to prioritize rapid innovation:
Leaders promoting caution and safety
Some of the most prominent voices in AI promoting careful development have emphasized the technology's potential for profound, unintended risks.
  • Geoffrey Hinton: Known as one of the "godfathers of AI," Hinton left his role at Google to speak freely about the dangers of AI. He has warned that intelligent machines could ultimately pose an existential threat to humanity and that future AI versions could exhibit unexpected and harmful behaviors.
  • Elon Musk: The CEO of Tesla and X, who was also a co-founder of OpenAI, has been a long-time vocal advocate for AI regulation. Along with over 1,000 tech leaders, he signed an open letter in 2023 calling for a pause on large-scale AI experiments to allow for the development of robust safety protocols.
  • Sam Altman: The CEO of OpenAI has publicly acknowledged the risks posed by powerful AI systems, stating that people should be "a little bit scared" and that regulations are necessary to help avoid negative consequences.
  • Bill Gates: The Microsoft co-founder has urged that AI be tested carefully and properly regulated. He co-signed a statement from the Center for AI Safety warning of potential extinction risks if the technology is not properly controlled. 
Leaders prioritizing rapid innovation
Other prominent leaders and industry figures stress the importance of rapid development to stay competitive and harness AI's benefits, sometimes downplaying or expressing less alarm about the catastrophic risks emphasized by others.
  • Yann LeCun: Meta's Chief AI Scientist has often been a counterpoint to some of the more extreme warnings about AI safety. He has publicly criticized the "doomer" mentality around existential AI risk and advocated a more open approach to AI development.
  • Mark Zuckerberg: The CEO of Meta has emphasized the need for rapid innovation and has been less focused on the long-term existential threats of AI compared to competitors like OpenAI. Meta's open-source approach to AI development is a reflection of this prioritization of speed and widespread access over slower, more cautious, and centralized deployment.
  • Marc Andreessen: A prominent venture capitalist and co-founder of Netscape, Andreessen has been a very vocal critic of the AI safety movement. He published an essay arguing that AI will "save the world" and that the fears about AI are based on unfounded, "panic-driven" anxieties. 
