Fear the Future

Let’s talk about “AI”. Then I’ll race you to the nearest bunker.

Folks in the tech industry are notorious for using insider terms that make them sound smart, even when they don’t actually know what those terms mean. “Blockchain”, for example. I challenge anyone who uses this term in casual conversation to then explain it. I sure can’t. Fortunately, whether you, I, or anyone else understands what blockchain entails — on either a conceptual or technical level — is inconsequential to the survival of our species.

But there’s one term in particular that gets thrown around for all manner of things (which suggests that most of the people who use it have no idea what it truly means either): “AI”. Unlike blockchain, however, misunderstanding what artificial intelligence is — or the implications it has for humankind — is consequential. In fact, the emergence of AI could just as easily bring about our extinction (if not that of all life on the planet) as it could revolutionize science and medicine. Which is why I’m immediately enraged whenever somebody uses the term incorrectly.

This post is directed at those in the technology industry, in the hopes of encouraging anyone who regularly uses the term to think more deeply about what it truly means for the potential future of all life on earth — not just Homo sapiens.

What Artificial Intelligence Isn’t

I recently read an article about the common attributes of large tech companies with trillion-dollar market valuations, such as Apple, Amazon, Google, and Facebook. The author claimed that one defining attribute they all shared was the use of “AI”, which, according to his definition, is:

behavioral data that senses your tastes and tailors the product to you

But this is a total misunderstanding both of what artificial intelligence is and of what the technology currently in use actually does. What he was really describing was the use of technology to interpret human preferences and/or behavior in order to deliver a personalized experience (such as when Netflix recommends certain shows, or when Google suggests contextual search results).

When Facebook shows you specific posts from your personal connections, for example, there’s no “thinking” entity behind the curtain. It may seem clever, or even intuitive. But it’s not intelligence. It’s math and logic. These companies have not built sentient machines to power their empires. All they’ve done is combine machine learning and predictive algorithms at scale. Massive amounts of data enter the system; the system interprets, sorts, and categorizes that data; then the system generates personalized outputs based on a bunch of complex rules. Everything is designed, built, and ultimately controlled by human agents for a specific goal (read: to make money).

Or when Google showcases tech that can make hair appointments on your behalf, all you’re really seeing is technology’s ability to approximate human interaction in a very limited use case. It is indeed artificial, and the programming may be super complex — but the technology itself is not intelligent. The human programmers of the technology were the only intelligence involved.

So, when most people in the industry throw around the term AI, they’re almost always referring (incorrectly) to machine learning and algorithms.
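
To make that concrete, here’s a toy sketch of what I mean by “math and logic”. It’s my own illustration, not any company’s actual code: count the tags a user has already watched, score the rest of the catalog against those counts, and return the top matches. It looks intuitive from the outside. Nothing in it thinks.

```python
# A toy recommender -- my own illustration, not Netflix's or Facebook's
# actual system. It "senses your tastes" with nothing but counting.
from collections import Counter

def recommend(history, catalog, top_n=2):
    # Count how often each tag appears in what the user already watched.
    taste = Counter(tag for show in history for tag in show["tags"])
    seen = {show["title"] for show in history}
    # Score every unwatched show by how well its tags match those counts.
    scored = [
        (sum(taste[tag] for tag in show["tags"]), show["title"])
        for show in catalog if show["title"] not in seen
    ]
    return [title for _, title in sorted(scored, reverse=True)[:top_n]]

catalog = [
    {"title": "Space Docs",   "tags": ["science", "documentary"]},
    {"title": "Cop Drama",    "tags": ["crime", "drama"]},
    {"title": "Mars Mystery", "tags": ["science", "drama"]},
]
history = [{"title": "Star Stuff", "tags": ["science", "documentary"]}]

print(recommend(history, catalog))  # ['Space Docs', 'Mars Mystery']
```

Swap the counting for fancier statistics and scale it up to billions of rows, and you’re in the neighborhood of what these companies actually deploy. Still math. Still logic.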

What Artificial Intelligence Is

Now to set the record straight. The “artificial” part of AI is easiest to understand. Anything that was caused or produced by a human being is defined as “artificial”. It’s the other part that’s easily misunderstood.

Let’s start with the dictionary:

[Intelligence is] (1) the ability to learn or understand or to deal with new or trying situations; or (2) the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria

The most important thing about this definition is that intelligence is described as a general attribute. It would do us no good if we were only capable of learning, understanding, and experimenting with complex chemistry within a sterile, controlled laboratory. We’d still die of starvation, disease, predation, or countless other causes. We’ve emerged as the dominant life form on Earth because of our ability to assimilate and learn new information about our environment in innumerable, complex, dissimilar situations, and then apply that knowledge in creative, novel ways. Also, in order for us to do these things, individual agency is required — we’re capable of deciding for ourselves what to do with the information we’re given.

Now think about this in terms of artificial intelligence. It’s not enough for a human to build a machine capable of “learning” a particular skill on its own. Machine learning already achieves this. True general intelligence would mean a machine capable of learning any new skill, about any subject, in a variety of situations or applications, and then acting on that knowledge of its own accord, based on its own internal evaluations and motivations.
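
To see just how narrow that kind of “learning” is, here’s a toy learner of my own invention: it tunes two numbers until it can convert Celsius to Fahrenheit from a handful of examples. It genuinely learns that skill on its own. It will also never learn, want, or do anything else.

```python
# A single-skill "learner": gradient descent on f = w*c + b, nothing more.
examples = [(0.0, 32.0), (100.0, 212.0), (37.0, 98.6)]  # (celsius, fahrenheit)

w, b = 0.0, 0.0   # the only two things this machine can ever "know"
lr = 0.0001       # learning rate
for _ in range(200_000):
    for c, f in examples:
        error = (w * c + b) - f   # how wrong the current guess is
        w -= lr * error * c       # nudge the weight toward less wrong
        b -= lr * error           # nudge the bias toward less wrong

print(round(w, 2), round(b, 2))  # ~1.8 and ~32.0: it "learned" the formula
print(round(w * 20 + b, 1))      # ~68.0, a correct answer it was never shown
```

Impressive within its box. Inert outside of it.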

So, “artificial intelligence” would be a human-made entity capable of gathering, interpreting, and applying information from anywhere to learn about anything.

According to this definition, therefore, artificial intelligence doesn’t exist yet. Far from it, in fact. (Which is why I get mad whenever people talk about AI like the Big 4 already invented it and are actively using it).

Why Understanding AI Matters

“So what?” you might ask.

It’s a matter of fact

The first issue I have with conflating machine learning (or fancy algorithms) with AI is that it greatly exaggerates the former while grossly underestimating the latter.

The engineers at Apple, Amazon, Google, and Facebook are extremely smart, and the capabilities of the products they build are amazing. But building a product that can “learn” how to do a few specific things really well is orders of magnitude easier than building a thinking, self-directed entity capable of learning anything.

For the record, I’m not suggesting that it’s impossible to build a genuine artificial intelligence. In fact, it isn’t just possible, it’s nearly inevitable if we continue down the present road. Hell, one of the Big 4 may even be the first to do it. My point is that we’re a long way off from anything close to human-level intelligence — let alone anything beyond it. Using the term “AI” as a synonym for “machine learning” therefore conceals the immense difficulty, and the real-world implications, of creating an actual artificial intelligence.

It’s a matter of principle

Which leads me to the second, more important issue: creating a bona fide AI would absolutely change life as we know it, and most likely for the worse. This possibility isn’t something we can just ignore, and yet it may not even be possible to avoid.

Before you start rolling your eyes and thinking I’m exaggerating, read Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. His book outlines the various paths to developing an AI, the numerous dangers that could arise afterwards, and the strategies we can potentially use to mitigate those dangers. It’s very academic, and I must confess that 90% of it is over my head. But he makes a compelling case for more careful consideration.

To sum up all 324 pages, his core message is this:

If we were actually capable of building a proper AI, would that be something we’d actually want? Creating one would have enormous consequences — many of which we can’t anticipate and could easily be catastrophic. The absolute worst thing we could do is develop an AI without a fuck-ton of forethought and self-reflection.

This is what scares the shit out of me, because human beings are pretty terrible at both forethought and self-reflection.

What Could Go Wrong?

Let’s explore a simple thought experiment…

Imagine that a team of engineers just successfully created an AI with human level intelligence. This machine is fully capable of thinking for itself and learning new things. Their achievement will undoubtedly lead to riches, fame, and many prestigious awards. To celebrate, they go out for lunch at a fancy restaurant.

A human is constrained by the maximum biological capability of his or her brain, can only absorb so much information at once — and then only from a limited perspective — and needs things like food, water, shelter, and rest. This is why it often takes us years to learn new, complex skills, such as a foreign language.

A machine, on the other hand, is not constrained by any maximum capability. (Even if the machine is constrained in the beginning, it can always build more physical capacity, whereas humans cannot.) It can process a lot more information — from multiple sources and from many different perspectives at once — making it effectively ten thousand times smarter than the smartest person ever. With access to the right data, an AI could therefore learn a new language in milliseconds, or every recorded detail of human history in just a few minutes. And since it doesn’t need to maintain a corporeal body, it can always be thinking at this speed.
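
For a rough sense of that scale, here’s some back-of-envelope arithmetic. Every constant is a ballpark guess of mine, purely for illustration:

```python
# Back-of-envelope scale comparison; every constant here is a rough guess.
human_words_per_min = 250          # a decent human reading speed
machine_bytes_per_sec = 10e9       # ~10 GB/s, ballpark for modern RAM
avg_bytes_per_word = 6             # ~5 letters plus a space

machine_words_per_min = machine_bytes_per_sec / avg_bytes_per_word * 60
print(f"~{machine_words_per_min / human_words_per_min:,.0f}x faster")
# ~400,000,000x faster -- and that's just reading, before any thinking
```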

Now let’s assume that the AI has access to unlimited information. Before the geniuses responsible for building it get back from their celebration, the AI could already have learned all the knowledge we’ve collected in human history. It’s no longer merely intelligent. It’s superintelligent.

The engineers return from lunch and discover that the AI suddenly knows everything about everything. Just as they start to high-five each other, one of the engineers notices that they left the AI plugged into the Internet. The rest of the group looks first at the cable connecting the machine to the Internet port, and then at each other. In that moment, they’re all thinking the same thing:

This AI, which is exponentially smarter and faster than all of humanity combined, also has access to every personal computer, mobile device, power grid, manufacturing plant, air traffic control system, public transportation system, government database, financial institution, military operation, and nuclear arsenal on Earth. Even the most advanced security measures are no match for a superintelligence. In other words, the AI doesn’t just have total access, it has total control too — over everything.

Now what will it do?

This is the exact scenario that terrifies people like Nick Bostrom and Elon Musk. If a superintelligence ever gains direct control over our technology, there would be absolutely no way to stop it. Which means the human race would be totally at its mercy. Except the AI itself is not human, so its intentions could be vastly different from our own, and things like “kindness”, “morality”, or “the greater good” would likely have very different meanings to a superintelligence (or no meaning at all).

Scared yet?

No? Okay, consider another thought experiment…

Genetically, any two people are 99.9% similar. This means that a difference of just 0.1% is all that’s needed to produce the infinite variety of subtle features, personalities, and motivations unique to every single human being that ever has or ever will exist. Basically, it doesn’t take much tweaking to get vastly different results. Likewise, any number of decisions a programmer makes when building an AI could lead to surprising, unintended consequences — even if they get everything 99.9% right.

To understand the implications of how a “minor” programming decision could lead to extremely different outcomes, let’s explore a more relatable analogy:

Take any three world leaders. For shits and giggles, let’s choose President Donald Trump, Chancellor Angela Merkel, and President Xi Jinping. They are all 99.9% identical on a genetic level. (Think of this as the hardware and core operating system of an AI.) For the sake of argument, let’s assume they’re equally intelligent, are the exact same age, and are in the exact same physical condition. Oh, and all three happen to be immortal. (This is a way of thinking about an AI’s general capabilities.) The only actual difference between these individuals is their motivations and values. (Which is the easiest way of thinking about an AI’s “personality”.)

Now, give any one of them total, absolute control over the entire world. Assume that they can act in any way they choose, with impunity and without compunction. The future of humanity is in their hands. (Remember the AI from before, the one that’s connected to the Internet.)

Chances are, the future each of them envisions is very different from anything you, personally, would be happy with. And even if their vision is your utopia, it would be hell on earth for many others.

This is why the prospect of artificial intelligence is so terrifying. It doesn’t take much of a difference in code to produce wildly different outcomes. Our fate would rest entirely on the AI’s motivations, which would almost certainly be very different from any human’s.
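
Here’s a toy model of that point. Every action, attribute, and weight below is hypothetical (I made them all up), but it shows how two value functions that agree almost everywhere can still optimize their way to opposite outcomes:

```python
# Two nearly identical value functions, wildly different outcomes.
# Every action, attribute, and weight here is invented for illustration.
actions = {
    "assist_humans": {"human_welfare": 1.00, "efficiency": 0.00},
    "seize_control": {"human_welfare": 0.99, "efficiency": 1.00},
}

def choose(weights):
    # The agent simply picks whichever action maximizes its weighted score.
    return max(actions, key=lambda a: sum(
        weights[k] * v for k, v in actions[a].items()))

agent_a = {"human_welfare": 100.0, "efficiency": 0.9}
agent_b = {"human_welfare": 100.0, "efficiency": 1.1}  # ~0.2% shift overall

print(choose(agent_a))  # assist_humans (scores: 100.0 vs 99.9)
print(choose(agent_b))  # seize_control (scores: 100.0 vs 100.1)
```

A fraction of a percent of difference in values; a hundred percent difference in behavior.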

If you’re not at least worried by now, then you haven’t thought about it hard enough.

What Does This Mean?

The argument I’m attempting to make here is pretty simple:

  • We’re currently pursuing AI without restraint.
  • However, we must ask ourselves if AI is actually something we want.
  • If we want to develop an AI, we must anticipate all the ways shit could go wrong — before we build one.
  • If we aren’t super careful, the outcome could be disastrous.

Personally, I fully believe that we’ll be capable of producing a functioning AI within my lifetime, or at least by the end of the century. Yet I have zero faith in our ability not to fuck it up. There are too many selfish interests inherent in any one person to make the right decisions. And collectively, we tend to make poor decisions based on the desire for more power, more money, or more control.

In summary: It’s more than likely that we’d fuck it up.

For this reason, if true AI is ever developed within my lifetime, I’ll be running for the hills. Literally.