Over the past five years, the Israeli author and historian Yuval Noah Harari has quietly emerged as a bona fide pop-intellectual. His 2014 book “Sapiens: A Brief History of Humankind” is a sprawling account of human history from the Stone Age to the 21st century; Ridley Scott, who directed “Alien,” is co-leading its screen adaptation. Mr. Harari’s latest book, “21 Lessons for the 21st Century,” is an equally ambitious look at key issues shaping contemporary global conversations — from immigration to nationalism, climate change to artificial intelligence. Mr. Harari recently spoke about the benefits and dangers of A.I. and its potential to upend the ways we live, learn and work. The conversation has been edited and condensed.
A.I. is still so new that it remains relatively unregulated. Does that worry you?
There is no lack of utopian scenarios in which A.I. emerges as a hero, but it can actually go wrong in so many ways. That is why the only really effective form of A.I. regulation is global regulation. If the world gets into an A.I. arms race, it will almost certainly guarantee the worst possible outcome.
Even though A.I. is still so new, is there a country already winning the A.I. race?
China was really the first country to tackle A.I. on a national level in terms of focused governmental thinking; they were the first to say “we need to win this thing,” and they are certainly ahead of the United States and Europe by a few years.
Have the Chinese been able to weaponize A.I. yet?
Everyone is weaponizing A.I. Some countries are building autonomous weapons systems based on A.I., while others are focused on disinformation or propaganda or bots. It takes different forms in different countries. In Israel, for instance, we have one of the largest laboratories for A.I. surveillance in the world — it’s called the Occupied Territories. In fact, one of the reasons Israel is such a leader in A.I. surveillance is the Israeli-Palestinian conflict.
Explain this a bit further.
Part of why the occupation is so successful is A.I. surveillance technology and big-data algorithms. You have major investment in A.I. [in Israel] because there are real-time stakes in the outcomes — it’s not just some future scenario.
A.I. was supposed to make decision-making a whole lot easier. Has this happened?
A.I. allows you to analyze more data more efficiently and far more quickly, so it should be able to help make better decisions. But it depends on the decision. If you want to get to a major bus station, A.I. can help you find the easiest route. But then you have cases where someone, perhaps a rival, is trying to undermine that decision-making. For instance, when the decision is about choosing a government, there may be players who want to disrupt this process and make it more complicated than ever before.
Is there a limit to this shift toward A.I. decision-making?
Well, A.I. is only as powerful as the metrics behind it.
And who controls the metrics?
Humans do; metrics come from people, not machines. You define the metrics — whom to marry or what college to attend — and then you let A.I. make the best decision possible. This works not because A.I. has a far more realistic understanding of the world than you do; it works because humans tend to make terrible decisions.