Vedant Misra

Founder/CEO at an AI startup called Kemvi. I'm interested in artificial intelligence, consciousness, rationality, neuroscience, markets, and Mexican food.

Distractions

Nick Bostrom is a philosopher at Oxford. He wrote Superintelligence, a book about how things could go wrong on the path to human-level AI.

One core idea in the book, and in other AI safety research, is that it will be difficult to teach human preferences to an AI. We underestimate the complexity of human preferences because we only ever communicate about them with other humans, all of whom have basic human preferences in common with us.

When we need to teach a superintelligent system what humans want, we'll find that it's hard.

Imagine we've developed superintelligence, and we need to give it a goal. It seems safe enough to have it maximize happiness for all humans.

But there are many ways that can go wrong. Consider a Matrix-style outcome: the AI knocks us all out and puts us in blissful simulations. Billions of humans fed through tubes, trapped in simulated reality, capable of experiencing literally anything they want. This achieves the goal, and yet it isn't what we wanted. This is referred to as the problem of perverse instantiation.

The thing we find so repulsive about the brains-in-a-vat scenario is that we know none of it is real. But why is it so repulsive when it isn't all that different from the world we're voluntarily creating for ourselves?

First, recognize that there seems to be no limit to the degree to which humans seek novel distractions. Take a trip to GameStop, or pull up YouTube. Millions of humans spend thousands of hours, day after day, generating new ways for people to spend their attention. Every month, enough new games, films, and shows are released to keep you engrossed for the rest of your life. Overheard at GameStop recently:

"Have you seen this new PlayStation exclusive? This game looks great...I really want a PS4 now. I wish I were a millionaire, I'd buy all of these games."

Not only is there limitless content for us to consume, there's also limitless content about other content---gameplay videos, movie reviews, people talking about and parodying TV shows. Not to mention books, articles, articles about books, books about articles...

Of course some of this is wholesome, but it's difficult to tell what is and what isn't. If you can regulate yourself well enough, you'll be all right. The problem is that regulating yourself will only get harder. Not only is content getting better---there is more and more of exactly what you want to watch/read/listen to---but we're also getting more free time.

Our insatiable desire for novel methods of escaping reality is colliding with our ability to generate more content that better matches what people want.

For now, the proliferation of hyperspecific content is driven by the massive number of people who can now easily generate and distribute their thoughts, thanks to the growth in bandwidth, the falling cost of technology, and the explosion in internet access.

But this gets a bit scarier when you consider that we're starting to build machines that can generate quality content. Check out Google's Project Magenta, for example. This is just the beginning.
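To make "machines generating content" concrete, here's a toy sketch of the crudest possible content generator: a character-level Markov chain. This is my own illustration, not how Magenta works (Magenta uses neural networks), and the filename is just a placeholder for any text you'd like to imitate. But the principle---a machine learning the statistics of human-made content and then producing more of it---is the same.

```python
import random
from collections import defaultdict

def build_model(text, order=4):
    """Map each length-`order` context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=200):
    """Extend `seed` one character at a time by sampling from the model."""
    order = len(seed)
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:  # dead end: this context never appeared in training
            break
        out += random.choice(followers)
    return out

corpus = open("corpus.txt").read()  # hypothetical path: any text to imitate
model = build_model(corpus, order=4)
print(generate(model, seed=corpus[:4]))
```

Swap the Markov table for a large neural network and the training file for most of the internet, and you get systems whose output starts to look like something a person might have made.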

Machines will grant us a massive amount of free time, as well as ways to spend that time. We're rapidly approaching a world in which we can have algorithms generate basically whatever the hell we want and fritter away our lives doing both (a) exactly what we want and (b) nothing at all of consequence.

Did you enjoy the Harry Potter series? Have the AI procedurally generate a version of it set in Beijing or New York. Do you enjoy EDM? Have the AI generate a hundred different two-hour mixes, each as good as a set by the best trance DJ.

All of the content we consume is just stuff that humans made to occupy the attention of other humans. Attention is the very currency of life, and yet it's remarkably normal that we allocate much of it to seeking out distractions.

By and large, we're okay with this. Even when you're binge-watching some TV show, the fact that it was written by people, directed by people, and acted in by other people allows it to retain a shred of meaning. It took a large group of people a lot of time and energy to produce that show.

But does content remain meaningful if we can get machines to generate all of it? I'd venture to say the answer is no. Blissfully and distractedly, we're inching toward a slippery slope. The space of possible distractions is infinite, and it's becoming easier and easier to fritter away our lives chasing distraction after distraction. Content is opium, and we're all junkies.

Remember the guy I overheard at GameStop?

"I wish I were a millionaire, I'd get all of these games."

I imagine turning to him and saying, "You know, the irony is, the way to become a millionaire is to stop playing games." We both pause to look around at the other junkies in the store, trading cash for distraction. He laughs, I laugh, we pay for our video games, and go on our way.