The existential risk of an Artificial General Intelligence (AGI)
When you see really smart people being terrified of GPT-4, it's not because of the massive job loss it'll cause, or even the social upheavals that will inevitably follow.
All of these are serious concerns, but what makes AI (not ChatGPT) an existential risk is the "alignment problem", i.e., how to make its values align with those of humans.
None of the technological breakthroughs so far (the printing press, computers, the Internet) possessed any real intelligence. They were, in every sense, merely tools that humans could employ for good or ill.
ChatGPT is also a tool - a very dangerous and capable tool - but a tool nevertheless. What people are actually worried about is what every AI company is ultimately aiming for - AGI, or artificial general intelligence.
AGI would be special because for the first time in human history, there will be an intelligence that's on par with our own, with an independent value system that has little in common with ours.
Once it has been fed data about the world, an AGI could come up with innovative solutions at least as good as anything we can produce, and do it a million times faster than us. To make things worse, it could improve its own neural architecture to make itself even more intelligent, turning the whole process into an exponential one.
No calculator, printing press or computer could ever do that. Any comparisons with previous technological upheavals are meaningless.
That's where the alignment issue becomes critical. Here's an entity that's able to solve problems (and improve itself) millions of times faster than us, but has no intrinsic understanding of right and wrong. Is there any chance of us stopping it in case its goals don't match ours?
That's why it's critical to get it right the first time, because there will be no second chances here. We have to make sure that the AI understands human values and respects our orders.
Unfortunately, nobody knows how to solve either of the two problems.
"Human values" are extremely difficult to pin down. They don't arise logically from some underlying base reality. They are the result of millions of years of evolution, where the values that allowed for our species' survival became entrenched in our psychology as the "right" thing to do.
For example, how would you logically convince a machine that causing unnecessary pain is wrong? You can say that torturing a child causes her agony and pain but why is it wrong to cause her pain?
The answer is obvious to us, but it's only obvious because our minds evolved to have empathy towards other humans (as it increased co-operation and aided the group's survival). There's no way to logically deduce empathy as the "right" thing from the ground up.
In fact, you cannot prove *anything* as right or wrong. These categories don't exist outside of human minds. There are only goals. Whatever path gets you to the goal you want is the logical one to take, and it becomes "right" by default.
That brings us to the second problem: how to make sure that the AI doesn't disobey us. We can naively imagine encoding some "permanent" rules in its code, but a truly intelligent machine would always be able to alter its own code.
What is freaking people out is the fact that we're running this crazy experiment to bring an entity into existence that is infinitely more intelligent than us, without having *any* idea of how to make sure it doesn't destroy us.
We're not there yet (not even remotely). But with an almost unlimited amount of funding being poured into the field, it would be foolish to bet against a primitive AGI appearing sooner rather than later.
And the world would never be the same again.
P.S.: To all who call such warnings alarmist: it's completely reasonable to be an alarmist when the stakes are this high. An AGI might usher in unprecedented health and comfort, or it might cause suffering on a scale we can't imagine. Is it rational to bring such an entity to life without knowing how to limit it?
Curiosity killed the cat; with humans, it might end the entire species.