
The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.
In 1951, Alan Turing made much the same point: “Let us now assume, for the sake of argument, that [intelligent] machines are a genuine possibility, and look at the consequences of constructing them... There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon.”[25]
- In 2014, famed physicist Stephen Hawking, together with physicist Max Tegmark and AI researcher Stuart Russell, warned about superintelligent AI systems “outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.”
- 2014 Nick Bostrom published Superintelligence, further fueling concern about this existential threat. Around the same time, the Future of Life Institute (FLI), Stuart Russell, Roman Yampolskiy, Elon Musk, and Bill Gates began publicly expressing worry about the risks of artificial intelligence (AI).
- 2016 The journal Nature warned: "Machines and robots that outperform humans across the board could self-improve beyond our control—and their interests might not align with ours."[37]
- 2017 FLI released Slaughterbots, a short film depicting a dystopian world of autonomous AI-powered weapons.
- 2020 Brian Christian published The Alignment Problem, about the challenge of building artificial intelligence systems that are aligned with human values.
- 2023 OpenAI leaders suggested that superintelligence may be achieved in less than 10 years.
- Nick Bostrom, founder of the Future of Humanity Institute at the University of Oxford, has argued that AI holds several potential advantages over the human brain (a rough numeric comparison of the first two points is sketched just after this list):
- Speed of computation: biological neurons operate at a maximum frequency of around 200 Hz, compared to potentially multiple GHz for computers.
- Internal communication speed: axons transmit signals at up to 120 m/s, while computers transmit signals at the speed of electricity, or optically at the speed of light.
- Scalability: human intelligence is limited by the size and structure of the brain, and by the efficiency of social communication, while AI may be able to scale by simply adding more hardware.
- Memory: notably working memory, because in humans it is limited to a few chunks of information at a time.
- Reliability: transistors are more reliable than biological neurons, enabling higher precision and requiring less redundancy.
- Duplicability: unlike human brains, AI software and models can be easily copied.
- Editability: the parameters and internal workings of an AI model can easily be modified, unlike the connections in a human brain.
- Memory sharing and learning: AIs may be able to learn from the experiences of other AIs in a manner more efficient than human learning.
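A rough, back-of-the-envelope comparison of the first two points, purely for illustration: the 200 Hz and 120 m/s figures come from the list above, while the 3 GHz clock speed and the speed-of-light signal bound are assumed representative values.

```python
# Illustrative order-of-magnitude comparison only; the clock speed is an assumption.
NEURON_FIRING_RATE_HZ = 200   # approximate maximum firing rate of a biological neuron
CPU_CLOCK_HZ = 3e9            # assumed clock speed of a typical modern processor (~3 GHz)

AXON_SPEED_M_S = 120          # fastest axonal conduction speed cited above
LIGHT_SPEED_M_S = 3e8         # upper bound for optical/electronic signal propagation

print(f"Computation-speed ratio: ~{CPU_CLOCK_HZ / NEURON_FIRING_RATE_HZ:,.0f}x")   # ~15,000,000x
print(f"Signal-speed ratio:      ~{LIGHT_SPEED_M_S / AXON_SPEED_M_S:,.0f}x")       # ~2,500,000x
```

Even as crude estimates, both ratios run into the millions, which is the substance of the speed argument.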
- In March 2023, the Future of Life Institute issued an open letter, “Pause Giant AI Experiments”, calling on AI labs to pause for at least six months the training of systems more powerful than GPT-4.
- Two months later, hundreds of prominent people signed onto a one-sentence statement on AI risk asserting that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
- A danger of the exponential growth in AI capabilities is that progress can look imperceptible for a long stretch and then suddenly blossom into transformative abilities; the simple doubling series sketched at the end of this list shows how late such growth becomes visible.
- However, Meta's Chief AI Scientist Yann LeCun remarked in 2024 that such a transition “is not going to be an event… It is going to take years, maybe decades… The history of AI is this obsession of people being overly optimistic and then realizing that what they were trying to do was more difficult than they thought.”
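To see why exponential improvement can feel imperceptible and then sudden, consider a purely illustrative doubling series; the 30-step horizon below is an arbitrary assumption, not a forecast.

```python
# Illustrative only: a quantity that doubles at every step stays tiny relative to
# its final level for most of the run, then crosses 50% only one step from the end.
STEPS = 30                    # arbitrary horizon chosen purely for illustration
final_level = 2 ** STEPS

for step in (10, 20, 25, 28, 29, 30):
    level = 2 ** step
    print(f"step {step:2d}: {level / final_level:.6%} of the final level")

# step 10: 0.000095% of the final level
# step 20: 0.097656% of the final level
# step 25: 3.125000% of the final level
# step 28: 25.000000% of the final level
# step 29: 50.000000% of the final level
# step 30: 100.000000% of the final level
```

Under steady doubling, nearly all of the visible change arrives in the last few steps, which is the sense in which gradual progress can register as an abrupt jump.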