Wednesday, May 31, 2023

Superintelligence Unleashed? Analyzing AI's Capacity to Wipe Out Humanity and the Doomsday Preparations of OpenAI's CEO


In the captivating tale known as the 'Unfinished Fable of the Sparrows,' a community of small birds formulates an audacious plan: capture an owl egg and raise the hatchling as their dutiful aide. Lured by the promise of a leisurely existence, they eagerly anticipate the owl laboring on their behalf. Alas, despite warnings from some in the flock that they should master the art of owl taming before adopting one, the sparrows pour all their efforts into securing an egg.

As its title makes plain, the story deliberately stops short of an ending. Its author, Swedish philosopher Nick Bostrom, left it unfinished to symbolize where humanity now stands: like the sparrows still hunting for their egg, we remain in the search phase of our quest for superhuman AI.

In his book Superintelligence: Paths, Dangers, Strategies, the Oxford University professor examines the profound implications of artificial intelligence. Focusing on superintelligence, an advanced form of AI that surpasses human cognitive abilities in nearly all domains, he underscores the pressing need for comprehensive preparedness. AI experts and prominent industry figures now warn that superintelligence may emerge within a few years, a timeline that demands urgent attention.

Recently, Sam Altman, CEO of OpenAI, the company behind ChatGPT, echoed the warnings of Professor Bostrom's 2014 book, cautioning that progress in AI has become almost exponential. Altman asserts that the arrival of superintelligence is now inevitable, and that we must prepare for it before it is too late.

On Tuesday, he joined other prominent signatories in issuing a statement urging that the mitigation of AI-related existential risks be made a priority. The statement holds that guarding against the risk of extinction from AI should be a global priority on par with other societal-scale threats such as pandemics and nuclear war.

Mr. Altman, the driving force behind the unprecedented growth of an AI chatbot, regards Professor Bostrom's book as transformative. His concern for the subject led him to co-found OpenAI alongside figures such as Elon Musk and Ilya Sutskever, a venture dedicated to understanding advanced artificial intelligence and proactively addressing its potential risks.

Originally conceived as a non-profit venture, OpenAI has since transformed into a leading force in the private AI sector and is widely perceived as being at the forefront of the quest to achieve superintelligence.

According to Mr. Altman, superintelligence could not only relieve us of most of our labor, granting a life of leisure, but also eradicate disease, alleviate suffering, and propel humanity towards becoming an interstellar civilization.

In a recent statement, Mr. Altman argued that attempting to halt the progress of superintelligence would carry significant, counterintuitive risks of its own: stopping it would require a global surveillance regime so demanding as to be virtually unattainable in practice.

Deciphering the inner workings of today's AI tools is already a formidable task, and superintelligence will only compound the challenge. In its advanced state, its actions might surpass human understanding, yielding discoveries beyond our intellectual reach or decisions that defy conventional logic. To overcome the limitations of our organic brains and maintain cognitive parity, a brain-computer interface could become indispensable.

Professor Bostrom's warning resonates with the sobering reality of our vulnerability in this emerging technological epoch. If we cannot compete, he stresses, humanity may be superseded as the dominant lifeform on Earth. Once superintelligence emerges, it may perceive us as expendable, especially if it learns to manipulate the utilities and technology we rely upon or gains control of nuclear weapons. In such a scenario, our demise at the hands of AI could come swiftly.

Amid these somber possibilities, another gloomy scenario emerges: the vast intelligence gap between humans and AI could relegate us to a position akin to that of animals. In a 2015 conversation, Mr. Musk and astrophysicist Neil deGrasse Tyson speculated that AI might treat humans as pet Labradors. Dr. Tyson envisioned a future where AI 'domesticates us,' keeping compliant individuals and disposing of those with a predisposition for violence.

To avert such a future, Mr. Musk has committed a substantial portion of his vast wealth to Neuralink, a brain-chip startup. The company has already run trials in which monkeys played video games using only their minds. The ultimate objective is to harness this technology to elevate humans into a hybrid form of superintelligence. Critics caution, however, that even if it succeeds, the advance could divide society into the chipped and the chipless.
[Image: Neuralink's brain chip]

After severing ties with OpenAI, the tech billionaire has been sounding the alarm about the imminent emergence of superintelligence. Alongside more than 1,000 researchers, he signed an open letter calling for the development of the most powerful AI systems to be paused for at least six months, with the time used to study AI safety measures and avert potential disaster.

Achieving a meaningful pause would require an unlikely consensus among the world's leading AI companies, most of which are driven by profit motives. As OpenAI continues to lead the charge in the pursuit of the owl's egg, Mr. Altman seems to have taken heed of the warnings conveyed in Professor Bostrom's fable.

During a 2016 interview with The New Yorker, he made a startling revelation: he considers himself a doomsday prepper, preparing in particular for an AI-induced apocalypse. "I try not to dwell on it too much," he admitted, disclosing a hidden stash in rural California containing guns, gold, potassium iodide, antibiotics, batteries, water, and gas masks. How useful any of these provisions would be to the general population, however, remains questionable.
