- Elon Musk – For being one of the few people in Silicon Valley, or in all of the US really, to think big enough. He earns the top spot just for saying that he hopes to die on Mars.
- Thomas Piketty – For giving this data-eager media climate a much-needed data infusion, showing how the post-WWII period was a blip and that we have reverted to historical levels of inequality.
- Lawrence Lessig – For trying to tackle campaign finance reform through the Mayday Super PAC.
- Maria Popova – For her untiring work compiling the most interesting articles and links of the week in her Brain Pickings newsletter.
- Tim Ferriss – For being the guinea pig of experiments in self-improvement and bio-hacking, so that the rest of us don’t have to test everything on ourselves, but can just follow his shining example instead.
- Nick Bostrom – For starting the debate on how we should build an AI that will not destroy humankind in his brilliant book Superintelligence.
- Yuval Noah Harari – For eloquently and innovatively summarizing the rise of humans in his book Sapiens: A Brief History of Humankind.
- Peter Thiel – For investing in businesses that can create 10x improvements instead of incremental change, and for supporting potentially society-changing ideas such as Seasteading.
- Richard Linklater – For making one of the most innovative movies of recent years in Boyhood.
- Max Tegmark – For his work on multiverses, for example in this year’s book Our Mathematical Universe.
Some of the top risks that featured in the debate this year:
- Artificial Intelligence and superhuman machine intelligence – A trio of impressive people have this year highlighted the long-run threat of “too smart” AI. Elon Musk said that by developing AI we are summoning the demon, Stephen Hawking cautioned against its unrestrained development, and Nick Bostrom, in his outstanding book Superintelligence, laid out the case for why most scenarios for AI development lead to human extinction, because of how such an intelligence will think.
- Killer Asteroids – Several projects have been started to scan for killer asteroids that might wipe humanity out before AI gets to do it.
- Climate Change inaction – If 1 and 2 don’t do humanity in, we can always rely on climate change to deliver. Stunning scenarios were delivered this year of how the world will look as we blow past 2 degrees of warming. In some ways, it feels like this was the year when we finally started to see a sea (pun intended) change in public sentiment. But political constraints will ensure that this doesn’t mean any meaningful action, at least for now.
- Rise of Nonlinear Terrorism – ISIS has clearly shown this year how unpredictable non-state actors can be, and how impactful and lethal.
- EU Deflation – The EU continues to balance on the knife’s edge of falling into deflation, further cementing its decline in importance as a region.
- The Chinese debt crisis – This continues to be the elephant in the room, mentioned only in whispers. It may or may not cause a rerun of the financial crisis.
- A Thirty-Year Middle East Sunni-Shia conflict – A real danger is that ISIS is just the most visible piece of the centuries-long brewing Sunni-Shia conflict, and that we’ll eventually see a Saudi-Iran conflict.
- Oil at $50 (mostly if you’re Russia/Venezuela/Iraq) – Most of the world is cheering the falling oil prices, but it also has negative effects, and not just for the world’s petrodictatorships. If US shale producers get squeezed out, the Gulf states regain their power to dictate the terms of the world economy.
- South China Sea – The rise of a Sino-centric Asia continues to be most prominently reflected in the fears of a conflict starting from an incident in the South China Sea, given the multitude of interested parties and the unregulated waters.
- Climate conflicts – The longer we wait to tackle climate change, the greater the danger of climate-driven conflicts over water, food, arable land, and other resources that will become scarce in the decades to come.
Lucy Kellaway wrote an article a while back poking fun at the McKinsey Global Institute’s new set of long-term forecasts, which look 50 years into the future. As part of her brilliant takedown of the report, she makes the very astute observation that the trends MGI identifies are not trends of the future; they are trends of the present. It has become increasingly clear just how hard it is for people to forecast significant change. We can see linear change, but as soon as the curve is not linear but exponential or discontinuous, our foresight breaks down. We are fine with change, we even like it, as long as it is nice incremental change and not “superchange”. Our brains are programmed to enjoy inertia and to protest if things seem too foreign.
I’ve lately been enjoying Nick Bostrom’s Superintelligence. He seems to be one of the few people who are comfortable with the idea of superchange. In the book, he has a chart showing the outcomes that experts in the field foresee for superhuman machine intelligence. Even among these people, the most knowledgeable in the field, a majority think superhuman machine intelligence will most likely have moderately good outcomes. Only a very small minority (<10%) foresee extremely negative outcomes. This feels like an extremely short-sighted assumption.
It used to be the case that we could learn about the future by looking at the past. Now it seems this is no longer true, since today’s world might in fact be more complex and non-linear than in times past. However, even if we can’t learn about the content of the future, we can surely learn about the speed and magnitude of change. It is an undeniable fact that someone looking 20 years forward in 1994 would not have been able to foresee the things we take for granted today. This ranges from obvious aspects, such as the powerful computers in our pockets that we know as cell phones, to the inescapability of climate change. It is therefore extremely presumptuous of us to assume that we can forecast 2034 with anything remotely approaching certainty. It seems to be a statistical impossibility that we would not experience superchange at the same rate as in the past. It is actually even more likely than in the past, given the combinatorial aspects of invention, as outlined by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age.
We have developed tools to forecast the future and imagine change based on current irreversible trends, but now we need to invent the tools to imagine superchange. Otherwise, we are proceeding blindly into the future.