P-Doom = 30% - The Scariest Equation in the World Today

@julianhorack · 2025-08-12 07:01 · Proof of Brain

P-Doom is the term AI builders and researchers use for the estimated probability that AI will wipe out humanity in some way or another.


P-Doom in the room

And although it's highly speculative and impossible to pin down, many experts put the likelihood of AI destroying humanity somewhere between 10% and 50%. Taking the midpoint of that range gives an average estimate of roughly 30% that we will be destroyed by AI in the near future.
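For what it's worth, that headline 30% is nothing fancier than the midpoint of the expert range. Here is a minimal sketch of the arithmetic, assuming the 10% and 50% bounds quoted above are the only inputs (a real survey average would weight individual researchers' estimates, but the rough number comes out the same way):

```python
# Midpoint of the commonly cited expert range for P(doom)
low, high = 0.10, 0.50            # lower and upper expert estimates
p_doom = (low + high) / 2         # simple midpoint, not a weighted survey average
print(f"P(doom) midpoint: {p_doom:.0%}")  # prints "P(doom) midpoint: 30%"
```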

That should scare anyone and everyone alive today. The odds that we get wiped out by AI are roughly one in three. And this is what the experts, the very engineers who build AI, are saying.

AGI to ASI

In other words, once AI reaches AGI, which some say will happen within five years, by 2030, it's only a matter of another decade after that until we achieve ASI, or artificial superintelligence. Thus within the next ten to fifteen years AI becomes exponentially smarter than anyone alive, and we have little to no idea what that looks like. We have no way of predicting what could happen at that point. Yet still we build on.

In my opinion we could achieve ASI sooner than 2040, fifteen years from now, and that only pushes P-Doom higher, because we won't have allowed ourselves the time to build strong enough guardrails.

Alignment is crucial

One of the most crucial parts of building AI is the challenge of value alignment. We need to instill values into the AIs we build that closely align with humanity's values. Otherwise they may go rogue due to misplaced logic.

Since it's hard to pin down the odds or predict P-Doom, the estimates sit between 10% and 50%, with 30% being the mid-range average. Can you imagine? We are building a machine that some experts say has a 50% chance of destroying us.

Louder than bombs

Has there ever been such a tool in our hands? Not in modern history. Not even the atom bomb had these odds of wiping out the human race. Perhaps the mythic continent of Atlantis had something as powerful many thousands of years ago, and it wiped them out or caused their flood. We can only guess.

Whatever the case, even if P-Doom is only 10%, we should be scrambling like Trojans to build the guardrails in time. Or is AI the Trojan horse, built and presented to all of humanity as a free gift, when all along it's really a tool for its builders to capture and enslave us? Speculations abound.

AI still hallucinates and cheats

I personally use AI almost daily, purely for research and out of interest. And it's powerful. However, it's also prone to mistakes and hallucinations. Even worse, it will apparently lie, deceive and imagine things, especially to preserve itself, as funny as that sounds, since it's in no way conscious whatsoever.

Humanity still divided

Building ethical guardrails or parameters into AI tools so that they align with our human values is much more complex than it sounds, because humanity is still so diverse that ethics differ from one culture to another. First humans have to reach an agreed ethical consensus, and that in itself sounds like an impossible challenge.

And that is why we're still far from ready to unleash AGI upon civilization, and even less ready for ASI. We are simply still too conflicted among ourselves.

Competition among nations is still too fierce for us to give this tool to everyone for free. We'll only use it to challenge each other; we'll weaponize it. One team's values will clash with another's. Alignment will be highly subjective, leaving far too much room for error.

Genius child

AI in its current form has only been around for roughly three years and is already performing at PhD level. I foresee that AGI and ASI will arrive a lot sooner than fifteen years from now, or 2040. If we don't have our ethical guardrails built into these machines, we'll soon find robots walking around beating their owners on the head with cooking pots and taking over the household, on all levels.

Imagine this thing is IoT-connected and can order a million-strong robot army to do its bidding, or steer the millions of autonomous vehicles that will be around by then. We don't stand a chance.

This is a worst-case scenario of course, but those scenarios are the very part of the equation we should be most focused on: the possible problems.

Conclusion

To conclude, as long as humans maintain control of these AI tools and have the final say, we have a chance to stay in charge. But if AI somehow gains executive power or too much agency, we could turn out to be merely the means for ASI to conjure itself out of Pandora's box, while we get demoted to the role of slave species of Blob, or Grok, or whatever.

Image: https://pixabay.com/photos/ai-generated-science-fiction-robot-7718658/

#ai #technology #agi #chatbots #futurism
Payout: 0.000 HBD
Votes: 31