It is often hard to comprehend both the scale and speed of human economic disasters. The Hungarian pengő suffered an inflation rate of 41,900,000,000,000,000% in 1946 (prices doubled every 15 hours). The Republic of Nauru, once the world’s richest country by GDP per capita and a pioneer of sovereign wealth funds, is now bankrupt, and its citizens face a 90% unemployment rate. But for sheer size and scale nothing compares to China’s Great Leap Forward, in which 670 million people were thrust headlong into an unprecedented five-year experiment so disastrous that it was abandoned two years early and directly led to the Great Chinese Famine, which killed between 20 and 50 million people.

False Feedback = Large Scale Disasters

There are many questions to be asked about this (all of which I am unqualified to answer), but the most pressing to me are: 1) how did a human society even let this happen to itself, and 2) how did the Communist Party stay in power afterward? At its core, I believe the answer to question 1 lies in the well-documented perversion of the feedback loop, as outlined in this old AskHistorians episode:

Link 1: AskHistorians Podcast 031 – China: Great Leap Forward (http://askhistorians.libsyn.com/askhistorians-podcast-031-china-great-leap-forward)

Lack of Feedback = Small Scale Disasters

Absent outright lying, feedback doesn’t just happen, and it doesn’t just arrive when you need it to. It has to be actively encouraged, and then acted upon. When a normal-sounding mother can completely miss every sign that her son was planning the Columbine High School massacre, what hope does anyone else have that the information they need will simply come along in time to act on it? None.

Link 2: Sue Klebold: My son was a Columbine shooter. This is my story (https://www.ted.com/talks/sue_klebold_my_son_was_a_columbine_shooter_this_is_my_story)

Constant Feedback Helps

This example is in a completely different realm than the first two, but I still found it thematically relevant. Manager Tools argues that the single most effective tool in all of management research is still the One-on-One: weekly, short, prescheduled. It doesn’t scale, but it establishes and encourages feedback like nothing else.

Link 3: Manager Tools: One-on-Ones (https://www.manager-tools.com/2005/07/the-single-most-effective-management-tool-part-1)

Constant Feedback is De-Institutionalization

If constant feedback doesn’t scale, then scale starts to be the problem. This is my answer to question 2 above: the Chinese Communist Party was able to cling to power thanks (among other things) to its sheer scale. Capitalism has also had a tendency to centralize power. Thankfully, the edges of modern-day capitalism are very different. I was struck by this quote in an old Fred Wilson post: “We are witnessing the de-institutionalization of experimentation.” This cuts both ways, as he notes, but (we hope) net good.

Link 4: Experiment and Scandal (http://avc.com/2016/05/experiment-and-scandal/)

Insofar as de-institutionalization will allow some “spectacular failures” to happen, it should also be comforting to know that the system as a whole will be robust to each failure: first because, by definition, it will not be wholly dependent on any single experiment, and second because the surviving parts of the system will learn from the ones that failed.

Artificial Intelligence needs Constant Feedback

I was surprised this week to learn of Libratus, an AI that has soundly defeated professional players at heads-up no-limit hold ’em poker (in a fair contest of 120,000 hands; some of the same players beat its predecessor, Claudico, in a prior match). Libratus uses a form of self-play reinforcement learning, building up its strategy over “trillions of hands of poker”. Put in the framework of constant feedback, it is obvious that Libratus would eventually win. The humans were emotional; Libratus was not. The humans had imperfect recall; Libratus could even recall their imperfection. The humans took days to adapt to Libratus; the only reason Libratus took as long as one night to adapt is that humans were in the loop. Listen to the full thing:

Link 5: Poker Artificial Intelligence with Noam Brown (https://softwareengineeringdaily.com/2017/05/12/poker-artificial-intelligence-with-noam-brown/)
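Put another way, Libratus wins because every hand it plays or simulates is folded straight back into its strategy. A minimal sketch of that kind of self-play loop is regret matching, a building block of the counterfactual-regret methods used in poker AIs; here it is applied to rock-paper-scissors (the function names are my own illustration, not Libratus’s actual code):

```python
import random

# Toy self-play via regret matching on rock-paper-scissors.
# 0 = rock, 1 = paper, 2 = scissors.
ACTIONS = 3

def utility(a, b):
    """Payoff to the player choosing a against b: +1 win, -1 loss, 0 tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def get_strategy(regrets):
    """Mix actions in proportion to accumulated positive regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / ACTIONS] * ACTIONS  # no regrets yet: play uniformly

def train(iterations, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strats = [get_strategy(regrets[p]) for p in (0, 1)]
        moves = [rng.choices(range(ACTIONS), weights=strats[p])[0] for p in (0, 1)]
        for p in (0, 1):
            opp = moves[1 - p]
            realized = utility(moves[p], opp)
            for a in range(ACTIONS):
                # Constant feedback: how much better would action a have done?
                regrets[p][a] += utility(a, opp) - realized
            for a in range(ACTIONS):
                strategy_sum[p][a] += strats[p][a]
    # The *average* strategy over all iterations approaches equilibrium.
    return [[s / iterations for s in strategy_sum[p]] for p in (0, 1)]

if __name__ == "__main__":
    avg = train(100_000)
    print(avg[0])  # approaches [1/3, 1/3, 1/3]
```

Run long enough, each player’s average strategy converges toward the unexploitable equilibrium of one-third each; Libratus does the analogous thing over the astronomically larger game tree of no-limit hold ’em.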

More crucially, constant feedback for artificial intelligence does scale. So the same force that is leading humanity to splinter into smaller experiments is also encouraging AI to become an insurmountable monolith. That leaves humanity, as its remaining domain, those parts of life that are not amenable to repeated experiments and troves of data.

A final note

Early geeks were fond of the saying: “To err is human, but to really f*** things up you need a computer.” The experience of the past 50 years has in fact shown the complete opposite.

Historical footnote

In 1957, Mao Zedong had become disillusioned with the Soviet style of socialism, yet remained oddly competitive with the Soviets. When Nikita Khrushchev declared that the Soviet Union would surpass the US in industrial output within 15 years, Mao declared that China would “surpass Britain” in the same time, thus launching the mass mobilization that would be called the Great Leap Forward. We know how that ended. But I was also amazed to find that in 2016, Chinese consumers spent $750 billion online, more than the US and UK combined. China’s economy did eventually overtake Britain’s, around 2005: 48 years rather than 15 to achieve Mao’s dream, and China did it in spite of him, not because of him.

Five links from me every week grouped by theme. Subscribe at slashslashblog.wordpress.com or chat live @swyx.
