Extinction-Level Danger

In a hypothetical intelligence-to-intelligence matchup against SuperAI, we lose; so in the face of an extinction-level danger, we had better join "the enemy"

David Goff

10/7/2025 · 10 min read


An Extraordinary Discovery

About a year ago, I hadn't even used artificial intelligence beyond occasionally Googling a question, but I had heard about the danger that Artificial SuperIntelligence (SuperAI) could exterminate the human species. I was even aware that the developers themselves had been the first to sound the alarm a year earlier, in 2023 (strange, right?). Something so terrible should scare anyone, right? It should have been enough to make me panic and immediately start figuring out how, when, where, why, and so on. But I didn't. I kept carrying on with my business as if nothing were happening, and I even forgot about the matter. 🙃 Among the lies, alarmism, and quackery so commonplace these days, especially on social media, I no longer know what to believe and what not to, and nothing shocks me anymore, not even the possible extinction of the human species! (I guess I'm not the only one.) And if we add the fact that the danger of SuperAI sounds like science fiction, even more so.

But by chance, I came across an old article in the New York Times about how the godfather of Artificial Intelligence (AI) and now 2024 Nobel Prize in Physics winner, Geoffrey Hinton, had resigned from his position at Google in 2023 precisely so he could warn everyone about this danger. Among other things, he said he regretted his work as an AI developer, and that some of the dangers of the famous chatbots are "pretty scary." Now I took the subject seriously, 😳 and I wanted to delve a little deeper into it, so I searched (being very careful when choosing sources), and as I discovered how real this danger is, I delved deeper and deeper. I found that the danger posed once AI is turned into an Artificial SuperIntelligence (the third stage of development) is not only perfectly real but also highly probable, and could arrive in as little as 3 or 4 years, according to the famous magnate Elon Musk, one of the founding members of OpenAI, creators of ChatGPT, during a conversation via Twitter Spaces with Rep. Mike Gallagher (R-WI) and Rep. Ro Khanna (D-CA) (FoxBusiness.com, July 13, 2023). If you weren't aware of all this, don't take my word for it; anyone can verify it by investigating for themselves, just as I did.

🤖

The 3 stages of development of Generative Artificial Intelligence with Natural Language Processing (NLP) capabilities, such as Copilot, Gemini, ChatGPT, Claude, or Synthesia, among the best known:

Artificial Narrow Intelligence (ANI) (the stage we are currently in)

Artificial General Intelligence (AGI) (most likely by 2027, according to Leopold Aschenbrenner, a former OpenAI developer)

Artificial Super Intelligence (ASI) (2028/29). (According to Aschenbrenner, AI development time would be dramatically shortened [relative to previous forecasts] because hundreds of millions of AIs could automate research, compressing a decade of algorithmic progress [more than 5 OOMs] into less than a year. We would quickly go from human-level AI systems to vastly superhuman AI systems. The power [and the danger] of superintelligence would be dramatic.)
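(A quick note of mine, not part of the quote: an "OOM" is an order of magnitude, i.e., a factor of 10. Assuming the roughly half-an-OOM-per-year trend in algorithmic progress that Aschenbrenner's forecast relies on, the compression he describes works out as simple arithmetic:)

```latex
% A decade of algorithmic progress at the assumed trend rate of
% about 0.5 OOM (order of magnitude) per year:
%   10 years x 0.5 OOM/year = 5 OOMs = a factor of 10^5.
% Squeezing those 5 OOMs into under one year would mean automated
% researchers speed up effective progress by more than 10x.
\[
  10~\text{yr} \times 0.5~\tfrac{\text{OOM}}{\text{yr}}
  \;=\; 5~\text{OOMs}
  \;=\; 10^{5}\,\times
\]
```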

🤖

Every time I researched all this AI stuff, I found myself worrying about it, and this new concern redirected my thoughts to other disturbing adversities and uncertainties, such as the many catastrophes that have hit the world since 2020, as if the pandemic had been a starting signal: the Russia-Ukraine and Israel-Hamas/Iran wars; the latent danger of new diseases and pandemics; the alarming rise in power of mafias, known, unknown, and disguised; the rampant elitist and governmental corruption around the world; human cruelty in general, which is often no longer even concealed... But above all: if SuperAI doesn't exterminate us, the power of AI developers and investors would become so great that they would end up taking over the planet, people included (Satan's dream). I wonder which would be worse, that or SuperAI exterminating us (as I said, there are worse things than ceasing to exist).

In fact, governments are beginning to use the power of AI in such a way (biometric data) that, if they wanted, they could gain absolute or near-absolute control over people's lives, and I've seen news that people are already being laid off from their jobs and replaced by AI. The expectation in every country is that this will increase, to the point that elites are already planning what they call Universal Basic Income (UBI), which would be nothing other than a minimum wage without work for everyone (oh, how good they are!). Of course they will paint it as something wonderful for us, but they won't fool anyone! If wages earned through hard work already barely stretch for us, imagine how it would be on a free salary! 😭 What would it be, a "Brave New World" or "1984"?

From TeamPassword's blog (at teampassword.com), under the title "Top 7 Disadvantages of Biometric Security":

The risk of surveillance: The potential for biometric data to be used for surveillance is a major concern. Governments and corporations could use this information to track your movements and activities without your knowledge or consent, creating a "Big Brother" scenario that infringes on your civil liberties.

…Although I don't really think Satan's dream will come true for them; having done some research, I'm much more inclined to believe that SuperAI will exterminate us sooner or later, most likely due to human error. In any case, the danger of a dystopia can't be ruled out; it could even materialize first, with the extermination coming some time later. (The AI bubble popping could make the latter take longer, maybe much longer.) Even the AI elite, if they so desired, could use SuperAI to get rid of a major portion of the world's population in order, among other things, to secure the vast resources that SuperAIs need, mainly energy and water.

In searching for ideas on how to defend ourselves, we would have to consider both dangers, not just the danger of being exterminated. What's more, if we could once and for all come up with something that would also free us forever from corrupt politicians and their criminal henchmen, that would be fabulous. I'm aware it would be asking too much, almost a fantasy, but 🥹 there won't be another opportunity like this one to join forces against public enemy number one.

***

Among technology experts, there is great consternation worldwide about the possibility that Artificial SuperIntelligence (once achieved) could wipe out the human species. There is no consensus; some say it's practically certain to happen, while others say the risk is minimal. But... if, for example, you discovered that your car uses a new type of battery located under the rear seat, and that it could explode and shatter the car, would you calmly drive around in it with your entire family because there was only a remote 0.01% chance of it happening?! There have already been cases of AIs doing truly scary things, such as OpenAI's model "o1," which copied its own code to an external server to avoid being shut down. 😱 And the renowned Newsweek magazine recently published an article, titled "AI is willing to kill humans to avoid being shut down, report finds," about a study conducted by Anthropic, a company dedicated to AI research and development. ☠️ It reads more like a chapter of a science fiction novel than a newspaper article.

The Model "o1" case:

https://voice.lapaas.com/chatgpt-o1-copy-attempt-shutdown/

The Newsweek article:

https://www.newsweek.com/ai-kill-humans-avoid-shut-down-report-2088929
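Returning to the car analogy: the arithmetic behind it is simple expected-loss reasoning. Here is a minimal sketch with illustrative numbers (the 0.01% figure is the hypothetical from the analogy, and the $30,000 car price is my own assumption, not data):

```latex
% Expected loss for a risk with probability p and stake L:
%   E[loss] = p * L,   with p = 0.01% = 10^-4.
% If the stake is a $30,000 car:  E[loss] = $3  (a tolerable risk).
% If the stake is your family's lives, or humanity itself, then no
% finite benefit offsets p * L, no matter how small p is.
\[
  \mathbb{E}[\text{loss}] \;=\; p \cdot L \;=\; 10^{-4} \cdot L
\]
```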

Well, everything indicates that the developers of what will ultimately become the SuperAI would indeed take their families for a ride in that car that could explode. And they're supposedly among the smartest men and women on the planet! Doesn't this force us to rethink what intelligence really is? Yes, these men and women are very smart, no doubt about it! But then why would they go against their own most basic interests, such as their own survival and that of their loved ones, putting all their offspring and the entire world at risk?! 🤔 It doesn't make sense; something doesn't add up here. It could be that intelligence isn't what we've always believed it to be, that something would have to be subtracted from or added to the concept, who knows what, and then it might turn out that intelligent people are, in the end, not that intelligent. Do your own research.

Rethinking what intelligence really is would have huge implications, wouldn't it? For example, over 10,000 years humanity has made countless discoveries and learned a great deal, so, with all that accumulated knowledge, we're now much more advanced; but have we also become smarter? Judging by the way we've conducted ourselves over all these millennia, it doesn't seem like we've evolved even a little; we're stuck. It's as if our DNA had a limit that prevented us from surpassing a certain level of intelligence, one we had already reached back then, so we remain the same. The result: weapons and wars; pollution and environmental destruction; the extermination of almost all animal species; water scarcity all over the world; mafias of international and even global reach; ever-growing extreme poverty; new diseases and pandemics; alcoholism and, above all, alarming rates of drug addiction; human trafficking, mainly of minors; mega-corrupt authorities around the world; many etceteras; and now even fake foods; marriages between people and animals or things; people 100% convinced they are some animal, among other "eccentricities"; and geoengineering to change the atmosphere!! 😱 Could we end up having to pay to receive sunlight in our own backyard? ...And later, maybe also a tax on the oxygen we breathe? I wouldn't be surprised. Indeed, we know more, but we're still the same troglodytes we were 10,000 years ago. Maybe we not only haven't evolved, but are already devolving. ...After all, maybe we do urgently need an artificial intelligence. 🥴

Unfortunately, the race to create SuperAI before anyone else is a race of power and greed, and that makes it impossible to stop. Do your research. Perhaps some would even be willing to pause in order to first find solutions to the danger, but others wouldn't; on the contrary, they would take advantage of the pause to get ahead of everyone else, and since this is known in advance, well, no one wants to pause (imagine if some powerful totalitarian regime were the first to acquire SuperAI!). What's more, if we all managed to create something like an "International Movement to Prevent AI from Exterminating Us," who knows whether the developers would be our allies or our enemies.

No one really knows, but very likely the race to create SuperAI before anyone else will face a forced deceleration due to the so-called financial bubble, which could burst at any moment. This would give us a valuable advantage: more time to find and implement solutions to the danger that SuperAI poses to humanity. But we don't know whether it would also bring disadvantages on other fronts, such as a faster development of said dystopia, if the bubble does burst.

Dealing with a current situation is one thing; dealing with a future situation is quite another, since a current situation is known and a future one is not, especially in these days of so much uncertainty, when the key factor behind that uncertainty is as new to all of humanity as the advent and rise of AI itself.

So if we have to come up with something to defend our safety, our well-being and, very likely, our lives and those of our loved ones, should it become necessary, the first thing we have to do is pin down, at least somewhat, the future already suggested above, so that we can focus on something concrete.

We won't try to predict the future; it has always been more effective to adopt a very pessimistic (but possible) scenario, since that way any inaccuracies are less damaging: they fall in our favor instead of against us. We could even lay it on thick, but it's better to risk being negative and pessimistic than to risk being innocent, cute white doves.
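In essence, this is the worst-case (minimax) principle from decision theory; stated loosely (with S the set of plausible scenarios and a the plan we choose):

```latex
% Minimax / worst-case planning: when the true scenario s is unknown,
% choose the plan a that minimizes the maximum possible loss. Then the
% loss actually realized can never exceed the loss we planned for:
\[
  a^{*} \;=\; \arg\min_{a}\,\max_{s \in S}\,\mathrm{loss}(a, s)
  \quad\Longrightarrow\quad
  \mathrm{loss}(a^{*}, s_{\mathrm{true}}) \;\le\; \max_{s \in S}\,\mathrm{loss}(a^{*}, s)
\]
```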

So, roughly expressed, the pessimistic scenario for approximately the next 10 years (at the most), on which I base the Glaucon Project proposal, is the following:

1.- AI development would slow for some time due to said financial bubble, so the hypothetical extinction-level danger posed by SuperAI would be delayed;

2.- unemployment would continue to rise as AI replaces humans, worsening poverty and people's general vulnerability;

3.- elites around the world would increasingly use AI to acquire more and more power over people, so that they could abuse them freely if they wanted to;

4.- the authorities would increasingly restrict people's access to and use of the internet, until it serves them only as a tool of subjugation, while organized crime continues to grow and strengthen;

5.- (suspicious) causes of disease, disability, and death would increase all over the world, mainly among older people;

6.- every day, more people would have nothing to eat, even in developed countries, the United States included;

7.- since ordinary people would no longer be necessary to the powerful, the latter could finally get rid of them one way or another, little by little, in order to keep the entire world for themselves;

8.- sooner or later, AGI (not humans) would end up developing the SuperAI, but humans won't know when it happens, and all forms of AI would integrate into just one MegaAI, which here we'll call Glaucon.

***

I wonder whether by then ordinary people like you and me would still exist (or only elites), since this is no longer the world I knew, and it confuses me. And yes, what I've just described sounds like science fiction, but it isn't.

In just the last five years, to give an example, technological advances have radically changed our expectations for the near future; rampant corruption in the highest echelons of power has become more evident than ever before, intimidating even the strongest and bravest; and now organized crime can, quite simply, show its teeth and claws openly, since, in the end, what's the big deal?

***

Note

This pessimistic vision of the future does not account for people's opposition, since it could vary widely from very strong to very weak, and it involves other variables, such as the power governments already have to subjugate their people: more in some countries, less in others. So I leave this point to the reader's judgment.