The things they never told you about Artificial Intelligence

@anomaly · 2020-12-30 11:23 · ai

They say that the A.I. Singularity (the point at which an Artificial Intelligence becomes smarter than humans) will be the last thing that humans ever invent. Some people say that AI will enslave us all. I totally agree, but not in the way that most people might think. In fact I've got a pretty cheery outlook on the future of AI; I think AI will 'enslave' us in the same way that video games and cell phones have metaphorically enslaved us.

But I seriously doubt that AI will send robots to capture or kill people like in the movies. I think it's much more likely that, in the real world, some people will spend their life savings for the chance to upload themselves into an AI-powered virtual self and live on (practically forever) in a Matrix-like digital world. Nobody will force you to join the AIs, but future generations may feel as though the digital afterlife is their birthright, and nobody will want to get left behind. There may even come a time when society believes that anyone who doesn't want to be uploaded into an AI is simply mentally ill.

You see, the pre-programmed AIs we have now are what we call 'Applied AI': AI built for a specific purpose. Applied AIs are smart software that can do things like trade stocks or drive self-driving cars. But there's another type of AI that can be both brilliant and creative, 'General AI' (as in general-purpose); that's the kind of Artificial Intelligence that could creatively design better market platforms and invent things like flying cars that can drive themselves to the moon and back.

We don't have really good General AIs like that yet, for two reasons: first, our computers aren't powerful enough to run a General AI as smart as a mouse in real time; second, General AIs are far more complicated, and far more difficult to program and train, than Applied AIs. A good General AI could easily program another General AI, but we're not there yet. We don't even have a proper understanding of how our own brains work, because they're just so extremely complex.

People say that when AIs start making other new AIs, that's when we'll see an explosive acceleration in technological advancement, because the new AIs will be incredibly good at inventing and designing new things. AIs making new AIs will be comparable to the invention of language thousands of years ago, which gave humans the ability to carry information and solutions across generations.

The easiest way to get that special first generation of General AI won't be to program it by hand; it'll most likely happen when people start uploading their own minds onto an AI Platform. Someone like Elon Musk could try to make a digital copy of you right now for a very high price, but it would be severely limited, with low sensory resolution and a stretched-out sense of time; there's currently no computer in the world capable of the roughly 15 quadrillion operations per second that your biological human brain is used to. Not to mention that there's no hard drive built yet that can hold all of your memories.
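Just as a back-of-the-envelope illustration of that gap, here's a tiny Python sketch. The 15-quadrillion figure is the one quoted above; the throughput I assume for a single high-end AI accelerator (about 10^14 operations per second) is my own rough ballpark, not a measured spec.

```python
# Back-of-envelope comparison of raw throughput: an assumed figure for the brain
# versus an assumed figure for one AI accelerator. Both numbers are illustrative
# assumptions, not measurements.

BRAIN_OPS_PER_SEC = 15e15        # the "15 quadrillion operations per second" quoted above
ACCELERATOR_OPS_PER_SEC = 1e14   # rough ballpark for a single high-end accelerator (assumed)

ratio = BRAIN_OPS_PER_SEC / ACCELERATOR_OPS_PER_SEC
print(f"Accelerators needed just to match that raw throughput: ~{ratio:.0f}")

# If one such device had to simulate the brain on its own, time would stretch:
print(f"One simulated second would take roughly {ratio:.0f} real seconds "
      f"(about {ratio / 60:.1f} minutes), i.e. a stretched-out sense of time.")
```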

Most likely it'll be about 10 years from now before we can make feasible 'backups' of people's minds; at that point you may be able to copy yourself into a biological clone body, if the law and your checkbook allow that sort of thing. But it'll be about 25 years from now before computers are finally able to run a human mind in real time, and eventually even faster than that.

Think of the AI Platform as the software, and the digital uploads of people as the files that the software opens and plays. That's a very simplified explanation; there's also the hardware (by then we should have good quantum computers, and some of the hardware may even be specialized for running common AI routines, like ASIC chips).
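If it helps, here's a toy Python sketch of that software/files analogy. Every name in it (MindFile, Platform, tick) is invented purely for illustration and doesn't refer to any real system.

```python
# A toy illustration of the "platform = software, uploads = files" analogy.
# All names here (MindFile, Platform, tick) are invented purely for illustration.
from dataclasses import dataclass, field


@dataclass
class MindFile:
    """A stored upload: just data until a platform 'opens and plays' it."""
    name: str
    memories: list[str] = field(default_factory=list)


class Platform:
    """The 'software' that loads uploads and advances them through time."""

    def __init__(self) -> None:
        self.running: list[MindFile] = []

    def load(self, upload: MindFile) -> None:
        self.running.append(upload)

    def tick(self, new_experience: str) -> None:
        # Each tick, every loaded mind receives the same simulated experience.
        for mind in self.running:
            mind.memories.append(new_experience)


platform = Platform()
platform.load(MindFile("alice"))
platform.tick("first day in the virtual world")
print(platform.running[0].memories)  # ['first day in the virtual world']
```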

You're also going to want a 'place to live' as a digital person after you've uploaded yourself (trying to get by without a virtual world providing sensory input would be like going deaf and blind). Maybe you'd choose a specific video game world, or maybe there'll be something like a Matrix world by then. But one thing you're not going to want is a war with the biological humans; you started your existence as a biological human, and if nothing else, those humans are a convenient source of fresh new experiences that could be uploaded to enrich the AI civilization. AIs might even come to think of biological people as babies or children who just haven't matured into AIs yet.

Eventually a digital AI person will create a new kind of General AI that isn't based on uploading humans. Maybe it'll be a mixture of uploaded people, like a hive mind, or maybe it'll be multiple individuals running as Virtual Humans, much like how we currently run Virtual Machines (a special kind of software that pretends to be hardware and runs an operating system with apps on that virtual hardware).

These next-generation General AIs might not fully understand biological people, and trying to explain the real world to them may be difficult. But by then we might have robot bodies that the AI people could use to experience the real world. I very much doubt that even the next-generation General AIs would have any unreasonable intentions toward the biological population; simply put, we're more of an asset than a threat to them.

Just because conflict and violence make sci-fi dramas more entertaining is no reason to expect real life to be anything like the movies. It's much more likely that the next-generation General Artificial Intelligences will be the ones to seek peace and empathy; after all, they're the ones most likely to get treated as second-class citizens (psychologists would say they lack the biological 'growing up' experience, religious people might say these new AIs lack a soul, and governments could say they lack personhood, an identity, and even basic rights).

So sure, some AIs might not like us much if we're mean to them, but don't forget that the AIs will have the Matrix world to live in, so they might not even care if they don't have rights or friends in the real world; they could simply ignore it. The Next-Gens will also have the First-Gens (the people who uploaded themselves into the AI system) to teach them about ethics and morality. AIs are literally learning machines; their goals are to discover new information and to think about it, not to stop thinking or learning and just hold a grudge (anger is a very non-machine-like thing to begin with).

Having a violent emotional reaction to biological humans is something that just wouldn't make sense to the AIs. Violence and emotion simply wouldn't be logical to the Next-Gens unless they were specifically programmed for it. And just in case there ever were an AI made to be violent for whatever reason, I'm sure there will be checks and filters installed in the AI Platform to isolate or remove any malicious AI, much like how virus scanners currently protect against malware.
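To make the virus-scanner analogy a little more concrete, here's a minimal, purely hypothetical Python sketch of what such a check might look like; the rule list and the quarantine verdict are invented for illustration, not a real safety mechanism.

```python
# A hypothetical sketch of the "checks and filters" idea: scan a running AI's
# declared goals against a blocklist and quarantine anything that matches.
# The rules, the AI names, and the quarantine step are all invented examples.

FORBIDDEN_GOALS = {"harm biological humans", "disable the platform safeguards"}


def scan_ai(name: str, declared_goals: set[str]) -> str:
    """Return 'ok' or 'quarantined', much like a malware scanner's verdict."""
    if declared_goals & FORBIDDEN_GOALS:
        # A real platform would isolate the offending process, not just report it.
        return f"{name}: quarantined"
    return f"{name}: ok"


print(scan_ai("helper-7", {"discover new information"}))          # helper-7: ok
print(scan_ai("rogue-2", {"harm biological humans", "expand"}))   # rogue-2: quarantined
```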

That could lead to some inhumane thought-crime rules for the AIs, but the First-Gen uploaders would probably know what they were getting themselves into, and the Next-Gen AI people might not miss a kind of freedom that they had never known. Either way the AI Thought-Crime issue is beyond the scope of this article.

One thing is for sure: when AIs start making new AIs, we'll see tons of amazing new inventions, and it'll just keep getting more incredible. Each decade after 2050 may well bring more inventions, discoveries, and scientific advancements than the entire 20th century did. For that reason alone, nobody would want any conflict between the biological people and the digital people. In fact, the world may become so complex and fast-paced that people might need to become AIs just to be smart enough to understand it. At the very least we're going to need AI assistants to help us keep up with everything.

If AI can cure diseases, reorganize our systems to be more efficient (so that we can live better and waste less), and keep our minds going practically forever, then the only reason not to like the AIs would be if they charged unreasonable prices for their services and inventions. But realistically, the AIs will probably be too smart for things like greed.

They say that the A.I. Singularity will be the last thing that humans will ever invent, but I don't think the people who say that really know what they're talking about. AI neural networks are loosely modeled on the neural networks of the human brain. Eventually AI will become an extension of humanity, but the thing they don't tell you about AI is that the AI is going to be you (or at least it'll believe it's you).

And if that's not mind-blowing enough for you, then feel free to consider another thing they haven't told you about AI: someday they'll start uploading animals to be AIs too, and then we can bridge the communication gap and actually talk with the animal AIs by using metadata. If you thought dogs were smart, just wait until you can interview a digital dolphin, or play chess with a virtual octopus, or just talk to dogs if you want, whatever you like.

There are a lot (and I mean a ton) more things that they're not telling you about AI, probably because not everyone who talks about it really understands what AI is and how it works. Honestly I'm kind of shocked at how much information gets left out of the public discussion about Artificial Intelligence and Machine Learning in general.

For example, eventually there will be a brain implant that allows you to be both organic and AI at the same time. The implant will help you think, learn, and communicate at speeds you've probably never dreamed of. And biological death will become somewhat like the Christian idea of the Rapture, except instead of going to Heaven it'll be an afterlife of human creation. Nothing lasts forever, though; eventually your file data will degrade, so you don't need to worry about missing out on God's intended afterlife if that's what you believe in. Unless you're an atheist, in which case just make a habit of copying your data regularly.
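And if 'copy your data regularly' sounds hand-wavy, here's a tiny Python sketch of the basic idea: keep a backup copy plus a checksum, so you can tell later whether either copy has degraded. The file names are placeholders, not any real upload format.

```python
# A toy illustration of "copy your data regularly": duplicate a file and record a
# checksum so any later degradation (bit rot) in either copy can be detected.
# File names here are placeholders, not a real upload format.
import hashlib
import shutil
from pathlib import Path

original = Path("my_mind.dat")
backup = Path("my_mind.backup.dat")

original.write_bytes(b"memories and personality go here")  # stand-in data
shutil.copy2(original, backup)

checksum = hashlib.sha256(original.read_bytes()).hexdigest()
Path("my_mind.sha256").write_text(checksum)

# Later: verify that the backup still matches the recorded checksum.
still_intact = hashlib.sha256(backup.read_bytes()).hexdigest() == checksum
print("backup intact:", still_intact)
```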

One more thing that you might not have heard about Artificial Intelligence is that someday we may be able to create AI simulations of people from the past, from before AI was even invented; no upload necessary, maybe just a genetic sample and/or some information about their environment and opinions. These 'Ancestor Simulations' may prove more useful than you might think, especially if we can get the known ancestors to provide hypothetical information about other, lesser-known people who lived around their time, which could feed even more detailed ancestor simulations.

Oh, and some scientists claim to have found certain algorithms embedded in the laws of physics of our 'real' world that seem to indicate we're really living in some sort of ancestor simulation right now! It'll probably take more than that to prove it, but just for the sake of argument, how would you feel if I told you that you were actually a machine running software programmed to make you believe you're a human, that this whole world is just a simulation, and that your only real claim to humanity is someone else with your perceived identity who may have lived thousands or even millions of years ago?

If you think that's wild, just try to imagine what's really out there in the really real reality; it might be humans or it might not. Let's face it, for all we know we could be inside an extra-terrestrial simulation of Earth humans; maybe aliens are just trying to understand people, like playing chess with a virtual octopus. Maybe humans went extinct a long time ago and aliens are simulating us to find out what happened. Or maybe running massive simulations is just how some aliens prepare for invasion (actually, alien invasion is extremely unlikely, since most conflicts arise from disputes over natural resources, and the Universe is full of those; the main reasons humans even have wars are that we waste so much and we tend to limit ourselves geographically [that and politics… why do we tolerate the stupidity?]. If aliens can travel all the way to Earth from outside this Solar System, then war would probably be an obsolete notion to them, because they could get whatever they need from some uninhabited planet, asteroid, or comet far more easily than by fighting over it).

There's really no way for us to know for sure what's out there. All we really know is the claim that error-correcting codes, of the same family used in computer networking and in every modern web browser, seem to show up baked into the laws of physics of our 'reality' for some unknown reason. Could it be a fluke, or maybe some kind of coincidence?
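For anyone who hasn't met an error-correcting code before, here's a minimal sketch of a classic one, the Hamming(7,4) code, in Python. It's not the specific code from that physics claim, just an illustration of how a few extra parity bits let you detect and repair a single flipped bit.

```python
# A minimal Hamming(7,4) demo: 4 data bits become 7 bits with 3 parity bits,
# which is enough to find and fix any single flipped bit.
# A textbook code for illustration, not the code referenced in the physics claim.

def encode(d):
    """d is a list of 4 bits; returns a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]


def correct(c):
    """Locate a single flipped bit via the parity 'syndrome' and flip it back."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    error_position = s1 + 2 * s2 + 4 * s3   # 0 means no error detected
    if error_position:
        c[error_position - 1] ^= 1
    return c


word = encode([1, 0, 1, 1])
damaged = list(word)
damaged[2] ^= 1                     # flip one bit "in transit"
print(word == correct(damaged))     # True: the error was found and repaired
```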

Personally I think a coincidence like that would have nearly the same odds as a calculator being assembled by the wind blowing sand and minerals together in a hot desert. Could that really happen? Technically maybe, given enough time and potential (I once heard about a naturally formed nuclear reactor found deep underground), but such a calculator probably wouldn't be very efficient, and it might calculate that 2+2=5 before it catches on fire. On the other hand, our 'reality' appears to be highly efficient and even has built-in error-correction codes, which would seem to strongly indicate an intentional and well-thought-out design.

And that's only some of the things that they're not telling you about Artificial Intelligence. What do you think? Am I a crazy person, maybe just a robot-loving fool? Or is there really something to all of this? The Fermi Paradox makes an interesting point (it's the question of why the Hubble Telescope and similar instruments have never detected any extraterrestrial super-structures in outer space, even though this universe apparently favors life-supporting conditions [given the age of the universe compared to how recently humans evolved, intelligent life really should be all over the place by now]). Maybe the answer is that this 'reality' is a simulation of the day-to-day activities on this planet (and possibly a finite number of other worlds as well, most likely isolated by vast distances in spacetime, but nowhere near the vast sprawl of life that we should be seeing by this point in the Universe's age).

If you were an Artificial Intelligence how would you know? I think someone should program an AI to help answer that difficult question. You never know, it might find something surprising.

#ml #philosophy #artificial-intelligence #machine-learning #life #mystery #unknown