An #AI Just Embarrassed the US #Air_Force in a Virtual #Dogfight
Can parliaments and political parties overcome these challenges and forestall the darker scenarios? At the current moment this does not seem likely. Technological disruption is not even a leading item on the political agenda. During the 2016 U.S. presidential race, the main reference to disruptive technology concerned Hillary Clinton’s email debacle, and despite all the talk about job loss, neither candidate directly addressed the potential impact of automation. Donald Trump warned voters that Mexicans would take their jobs, and that the U.S. should therefore build a wall on its southern border. He never warned voters that algorithms would take their jobs, nor did he suggest building a firewall around California.
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
#ai #harari #politics #clinton #trump #wall #california #firewall
New technologies will continue to emerge, of course, and some of them may encourage the distribution rather than the concentration of information and power. Blockchain technology, and the use of cryptocurrencies enabled by it, is currently touted as a possible counterweight to centralized power. But blockchain technology is still in the embryonic stage, and we don’t yet know whether it will indeed counterbalance the centralizing tendencies of AI. Remember that the Internet, too, was hyped in its early days as a libertarian panacea that would free people from all centralized systems—but is now poised to make centralized authority more powerful than ever.
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
#ai #harari #blockchain #distribution #concentration #power #cryptocurrencies
The biggest and most frightening impact of the AI revolution might be on the relative efficiency of democracies and dictatorships. Historically, autocracies have faced crippling handicaps in regard to innovation and economic growth. In the late 20th century, democracies usually outperformed dictatorships, because they were far better at processing information. We tend to think about the conflict between democracy and dictatorship as a conflict between two different ethical systems, but it is actually a conflict between two different data-processing systems. Democracy distributes the power to process information and make decisions among many people and institutions, whereas dictatorship concentrates information and power in one place. Given 20th-century technology, it was inefficient to concentrate too much information and power in one place. Nobody had the ability to process all available information fast enough and make the right decisions. This is one reason the Soviet Union made far worse decisions than the United States, and why the Soviet economy lagged far behind the American economy.
However, artificial intelligence may soon swing the pendulum in the opposite direction. AI makes it possible to process enormous amounts of information centrally. In fact, it might make centralized systems far more efficient than diffuse systems, because machine learning works better when the machine has more information to analyze. If you disregard all privacy concerns and concentrate all the information relating to a billion people in one database, you’ll wind up with much better algorithms than if you respect individual privacy and have in your database only partial information on a million people. An authoritarian government that orders all its citizens to have their DNA sequenced and to share their medical data with some central authority would gain an immense advantage in genetics and medical research over societies in which medical data are strictly private. The main handicap of authoritarian regimes in the 20th century — the desire to concentrate all information and power in one place — may become their decisive advantage in the 21st century.
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
#ai #harari
And if such a system sometimes fails, it may lead some people, or in some cases the whole country, to disaster, because a centralized system is a single point of failure, and that single point is not guaranteed never to make mistakes. Not every mistake will be fatal for everyone: some will be fatal for some, and some will be fatal for all.
Imagine, for instance, that the current regime in North Korea gained a more advanced version of this sort of technology in the future. North Koreans might be required to wear a biometric bracelet that monitors everything they do and say, as well as their blood pressure and brain activity. Using the growing understanding of the human brain and drawing on the immense powers of machine learning, the North Korean government might eventually be able to gauge what each and every citizen is thinking at each and every moment. If a North Korean looked at a picture of Kim Jong Un and the biometric sensors picked up telltale signs of anger (higher blood pressure, increased activity in the amygdala), that person could be in the gulag the next day.
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
#ai #harari
The conflict between democracy and dictatorship is actually a conflict between two different data-processing systems. AI may swing the advantage toward the latter.
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
#ai #harari
In one incident in October 2017, a Palestinian laborer posted to his private Facebook account a picture of himself in his workplace, alongside a bulldozer. Adjacent to the image he wrote, “Good morning!” A Facebook translation algorithm made a small error when transliterating the Arabic letters. Instead of Ysabechhum (which means “Good morning”), the algorithm identified the letters as Ydbachhum (which means “Hurt them”). Suspecting that the man might be a terrorist intending to use a bulldozer to run people over, Israeli security forces swiftly arrested him. They released him after they realized that the algorithm had made a mistake. Even so, the offending Facebook post was taken down—you can never be too careful. What Palestinians are experiencing today in the West Bank may be just a primitive preview of what billions of people will eventually experience all over the planet.
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
#ai #harari
The same technologies that might make billions of people economically irrelevant might also make them easier to monitor and control.
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
#ai #harari
On December 6, 2017, another crucial milestone was reached when Google’s AlphaZero program defeated the Stockfish 8 program. Stockfish 8 had won a world computer chess championship in 2016. It had access to centuries of accumulated human experience in chess, as well as decades of computer experience. By contrast, AlphaZero had not been taught any chess strategies by its human creators—not even standard openings. Rather, it used the latest machine-learning principles to teach itself chess by playing against itself. Nevertheless, out of 100 games that the novice AlphaZero played against Stockfish 8, AlphaZero won 28 and tied 72—it didn’t lose once. Since AlphaZero had learned nothing from any human, many of its winning moves and strategies seemed unconventional to the human eye. They could be described as creative, if not downright genius.
Can you guess how long AlphaZero spent learning chess from scratch, preparing for the match against Stockfish 8, and developing its genius instincts? Four hours. For centuries, chess was considered one of the crowning glories of human intelligence. AlphaZero went from utter ignorance to creative mastery in four hours, without the help of any human guide.
https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/
#ai #alphazero #stockfish8 #chess