We need strategic optimism for building a better future
Almost 20 years ago, I had the honor of writing a few articles for Kevin Kelly, founding executive editor of the iconic Wired magazine and author of several bestsellers, including ‘What Technology Wants’ and ‘The Inevitable’. When I recently ran into him at the EBIT conference in Riga, where we both gave a keynote speech, I realized it was the perfect opportunity to interview him for this ‘Never Normal’ newsletter.
One of the things I appreciate most about Kevin - besides his unique vision of technology and its impact on human behavior and society - is his unwavering positivity. This is a man who believes that our situation is getting better, but that most of us are looking too closely to be able to perceive our progress over time.
Though it’s perhaps not surprising that Kevin is not one of those dystopian thinkers who would have us believe that everything is breaking down and the future is only going to get worse, he is also not a fan of the idea of a utopia.
“A vision of this perfected, glorious, stain-free utopian future where all of our problems will be minimized and everything else is optimized is not only unrealistic, it is also undesirable,” he told me. “That kind of harmony is deadly. It would result in a kind of stasis where you don’t have any room for change or growth or betterment, and that’s what we really want to avoid.”
“I think a better model to aim for is my concept of protopia,” he suggested. “The ‘pro’ comes from the Latin prefix meaning ‘forward, forth, toward the front’ or ‘in favor of’, as in ‘pro versus con’, pro-ceed, pro-totype. The idea behind protopia is that we are moving, creeping forward very slowly, by accumulating many tiny incremental changes over time, which in themselves are almost imperceptible. Their real compounded impact can only be perceived by looking backwards, with a long view, and that’s probably why so many seem to believe that we are moving towards a dystopia instead.”
Kevin’s protopia idea does not mean that he believes that humanity’s challenges will be over soon. To the contrary, he sees huge and amazingly hard problems coming our way. But he does expect that we will be able to improve the world in tiny steps and that our overall situation will become significantly better over the long haul.
“We don’t really see glorious things happening right now, but you also know that you wouldn’t want to have lived 200 years or more ago. Hunter-gatherers keeping banker’s hours, as some would have us believe, is a myth. In general, most of our ancestors lived on the edge of starvation. They were always hungry, and just one bad year could kill them off. So, looking at the past with a long view forces you to acknowledge the reality of progress, which should be an absolute cause for optimism.”
Choosing what kind of AI world we want to live in
Kevin believes in the strategic importance of positivity and optimism for creating a better future. “I think it's really hard to build a great future without being able to envision it first. I just can't believe we can develop it accidentally or inadvertently. What we're lacking in general, for instance, is a picture of a world full of AI that we want to live in. We can imagine it going wrong, and we do, because that’s just a lot easier. So as things become more complicated, the good scenarios become a lot more improbable. And that's what we're headed to: an improbable future.”
“When I talk or write about certain evolutions being inevitable (note: he is referring to his book “The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future”) it doesn't mean that they're probable. They truly are not. But they are inevitable in the sense that we're constrained to certain forms, abiding by the laws of physics and chemistry. We cannot make everything that we can imagine. Only certain channels are physically possible. And so that's why some things are inevitable. But they're also highly improbable at the same time. Life on Earth is, in a certain sense, an improbable state, although at the same time, it might be quite common and maybe even inevitable for planets in the right Goldilocks conditions, like ours.”
Considering this, Kevin believes that we have the power to choose what kind of AI we’ll want in the future, even though we did not necessarily get to choose whether AI would emerge in the first place. It is up to us to decide which character it will have, who owns it, how it’s regulated, and whether it’s open or closed, transparent or not: “As we move forward, it’s inevitable that we will have artificial minds,” is how he put it. “It’s just built into the very nature of our technological system, what I call the Technium. But we will need to decide what kinds of artificial minds we want.”
The myth of AGI
Perhaps surprisingly, Kevin feels that artificial general intelligence (AGI) is a myth. “The large language models (LLMs) are very specific and will only become more so. Some versions specialize in text, others in images, video or translations. We’re not evolving towards a general intelligence at all. We will keep adding capabilities, of course, but it’s also important to realize that if you optimize something in one dimension, there will always be a trade-off in another. That's a classic engineering dilemma. We could collect the different types of AI in an ecosystem - like some kind of very specific all-star team, but for technology - but that too will come with trade-offs. So, I think that the idea that we're moving to the general in AI is an illusion.”
“It’s also true that we are building a superintelligence at a planetary scale, by hooking all the computers of the world together. All these AIs together can make something that operates at a scale that's way beyond our intelligence, but, again, there will be trade-offs for that too. It may be that it doesn't work as fast. It may not be as nimble as a tiny AI. It may not be as adaptable. But there will be compensation somewhere else.”
That’s why Kevin believes that both AGI and artificial superintelligence (ASI) are romantic science fiction fantasies, and that an all-encompassing AI will never usurp humanity. The good news here is that Kevin’s type of optimism can be acquired. Child psychology research has found that it is a teachable skill rather than a personality trait.
“It’s called learned optimism. You need to teach children that setbacks are only temporary. That it's not their identity, or their fate. It does no good to believe that you’re an unlucky person and that nothing ever goes your way.”
Just like him, I’m a realist in believing that we have some massive societal, environmental, biological and geopolitical challenges coming our way. But we both also believe that optimists will be the ones building the future. Rather than fearing it, they can envision what we need to get to a place that’s perhaps only slightly better than this one, trusting that all those “slightlies” will add up into something much greater over time. We just need the right people to drive these opportunities of the Never Normal.
This interview first appeared in Peter's newsletter.