This is the eighth in a series of interviews with philosophers on race that I am conducting for The Stone. This week’s conversation is with Noam Chomsky, a linguist, political philosopher and one of the world’s most prominent public intellectuals. He is the author of many books, including, most recently, “On Western Terrorism: From Hiroshima to Drone Warfare,” with Andre Vltchek.
– George Yancy
George Yancy: When I think about the title of your book “On Western Terrorism,” I’m reminded of the fact that many black people in the United States have had a long history of being terrorized by white racism, from random beatings to the lynching of more than 3,000 black people (including women) between 1882 and 1968. This is why in 2003, when I read about the dehumanizing acts committed at Abu Ghraib prison, I wasn’t surprised. I recall that after the photos appeared President George W. Bush said that “This is not the America I know.” But isn’t this the America black people have always known?
Noam Chomsky: The America that “black people have always known” is not an attractive one. The first black slaves were brought to the colonies 400 years ago. We cannot allow ourselves to forget that during this long period there have been only a few decades when African-Americans, apart from a few, had some limited possibilities for entering the mainstream of American society.
We also cannot allow ourselves to forget that the hideous slave labor camps of the new “empire of liberty” were a primary source for the wealth and privilege of American society, as well as England and the continent. The industrial revolution was based on cotton, produced primarily in the slave labor camps of the United States.
As is now known, they were highly efficient. Productivity increased even faster than in industry, thanks to the technology of the bullwhip and pistol, and the efficient practice of brutal torture, as Edward E. Baptist demonstrates in his recent study, “The Half Has Never Been Told.” The achievement includes not only the great wealth of the planter aristocracy but also American and British manufacturing, commerce and the financial institutions of modern state capitalism.
It is, or should be, well-known that the United States developed by flatly rejecting the principles of “sound economics” preached to it by the leading economists of the day, and familiar in today’s sober instructions to latecomers in development. Instead, the newly liberated colonies followed the model of England with radical state intervention in the economy, including high tariffs to protect infant industry, first textiles, later steel and others.
There was also another “virtual tariff.” In 1807, President Jefferson signed a bill banning the importation of slaves from abroad. His state of Virginia was the richest and most powerful of the states, and had exhausted its need for slaves. Rather, it was beginning to produce this valuable commodity for the expanding slave territories of the South. Banning import of these cotton-picking machines was thus a considerable boost to the Virginia economy. That was understood. Speaking for the slave importers, Charles Pinckney charged that “Virginia will gain by stopping the importations. Her slaves will rise in value, and she has more than she wants.” And Virginia indeed became a major exporter of slaves to the expanding slave society.
Some of the slave-owners, like Jefferson, appreciated the moral turpitude on which the economy relied. But he feared the liberation of slaves, who have “ten thousand recollections” of the crimes to which they were subjected. Fears that the victims might rise up and take revenge are deeply rooted in American culture, with reverberations to the present.
The Thirteenth Amendment formally ended slavery, but a decade later “slavery by another name” (also the title of an important study by Douglas A. Blackmon) was introduced. Black life was criminalized by overly harsh codes that targeted black people. Soon an even more valuable form of slavery was available for agribusiness, mining, steel — more valuable because the state, not the capitalist, was responsible for sustaining the enslaved labor force, meaning that blacks were arrested without real cause and prisoners were put to work for these business interests. The system provided a major contribution to the rapid industrial development from the late 19th century.
That system remained pretty much in place until World War II led to a need for free labor for the war industry. Then followed a few decades of rapid and relatively egalitarian growth, with the state playing an even more critical role in economic development than before. A black man might get a decent job in a unionized factory, buy a house, send his children to college, along with other opportunities. The civil rights movement opened other doors, though in limited ways. One illustration was the fate of Martin Luther King’s efforts to confront northern racism and develop a movement of the poor, which was effectively blocked.
The neoliberal reaction that set in from the late ’70s, escalating under Reagan and his successors, hit the poorest and most oppressed sectors of society even more than the large majority, who have suffered relative stagnation or decline while wealth accumulates in very few hands. Reagan’s drug war, deeply racist in conception and execution, initiated a new Jim Crow, Michelle Alexander’s apt term for the revived criminalization of black life, evident in the shocking incarceration rates and the devastating impact on black society.
Reality is of course more complex than any simple recapitulation, but this is, unfortunately, a reasonably accurate first approximation to one of the two founding crimes of American society, alongside of the expulsion or extermination of the indigenous nations and destruction of their complex and rich civilizations.
G.Y.: While Jefferson may have understood the moral turpitude upon which slavery was based, in his “Notes on the State of Virginia,” he says that black people are dull in imagination, inferior in reasoning to whites, and that the male orangutans even prefer black women over their own. These myths, along with the black codes following the civil war, functioned to continue to oppress and police black people. What would you say are the contemporary myths and codes that are enacted to continue to oppress and police black people today?
N.C.: Unfortunately, Jefferson was far from alone. No need to review the shocking racism in otherwise enlightened circles until all too recently. On “contemporary myths and codes,” I would rather defer to the many eloquent voices of those who observe and often experience these bitter residues of a disgraceful past.
Perhaps the most appalling contemporary myth is that none of this happened. The title of Baptist’s book is all too apt, and the aftermath is much too little known and understood.
There is also a common variant of what has sometimes been called “intentional ignorance” of what it is inconvenient to know: “Yes, bad things happened in the past, but let us put all of that behind us and march on to a glorious future, all sharing equally in the rights and opportunities of citizenry.” The appalling statistics of today’s circumstances of African-American life can be confronted by other bitter residues of a shameful past, laments about black cultural inferiority, or worse, forgetting how our wealth and privilege was created in no small part by the centuries of torture and degradation of which we are the beneficiaries and they remain the victims. As for the very partial and hopelessly inadequate compensation that decency would require — that lies somewhere between the memory hole and anathema.
Jefferson, to his credit, at least recognized that the slavery in which he participated was “the most unremitting despotism on the one part, and degrading submissions on the other.” And the Jefferson Memorial in Washington displays his words that “Indeed I tremble for my country when I reflect that God is just: that his justice cannot sleep forever.” Words that should stand in our consciousness alongside of John Quincy Adams’s reflections on the parallel founding crime over centuries, the fate of “that hapless race of native Americans, which we are exterminating with such merciless and perfidious cruelty…among the heinous sins of this nation, for which I believe God will one day bring [it] to judgment.”
What matters is our judgment, too long and too deeply suppressed, and the just reaction to it that is as yet barely contemplated.
G.Y.: This “intentional ignorance” regarding inconvenient truths about the suffering of African-Americans can also be used to frame the genocide of Native Americans. It was 18th century Swedish taxonomist Carolus Linnaeus who argued that Native Americans were governed by traits such as being “prone to anger,” a convenient myth for justifying the need for Native Americans to be “civilized” by whites. So, there are myths here as well. How does North America’s “amnesia” contribute to forms of racism directed uniquely toward Native Americans in our present moment and to their continual genocide?
N.C.: The useful myths began early on, and continue to the present. One of the first myths was formally established right after the King of England granted a Charter to the Massachusetts Bay Colony in 1629, declaring that conversion of the Indians to Christianity is “the principal end of this plantation.” The colonists at once created the Great Seal of the Colony, which depicts an Indian holding a spear pointing downward in a sign of peace, with a scroll coming from his mouth pleading with the colonists to “Come over and help us.” This may have been the first case of “humanitarian intervention” — and, curiously, it turned out like so many others.
Years later Supreme Court Justice Joseph Story mused about “the wisdom of Providence” that caused the natives to disappear like “the withered leaves of autumn” even though the colonists had “constantly respected” them. Needless to say, the colonists who did not choose “intentional ignorance” knew much better, and the most knowledgeable, like Gen. Henry Knox, the first secretary of war of the United States, described “the utter extirpation of all the Indians in most populous parts of the Union [by means] more destructive to the Indian natives than the conduct of the conquerors of Mexico and Peru.”
Knox went on to warn that “a future historian may mark the causes of this destruction of the human race in sable colors.” There were a few — very few — who did so, like the heroic Helen Jackson, who in 1880 provided a detailed account of that “sad revelation of broken faith, of violated treaties, and of inhuman acts of violence [that] will bring a flush of shame to the cheeks of those who love their country.” Jackson’s important book barely sold. She was neglected and dismissed in favor of the version presented by Theodore Roosevelt, who explained that “The expansion of the peoples of white, or European, blood during the past four centuries…has been fraught with lasting benefit to most of the peoples already dwelling in the lands over which the expansion took place,” notably those who had been “extirpated” or expelled to destitution and misery.
The national poet, Walt Whitman, captured the general understanding when he wrote that “The nigger, like the Injun, will be eliminated; it is the law of the races, history… A superior grade of rats come and then all the minor rats are cleared out.” It wasn’t until the 1960s that the scale of the atrocities and their character began to enter even scholarship, and to some extent popular consciousness, though there is a long way to go.
That’s only a bare beginning of the shocking record of the Anglosphere and its settler-colonial version of imperialism, a form of imperialism that leads quite naturally to the “utter extirpation” of the indigenous population — and to “intentional ignorance” on the part of beneficiaries of the crimes.
G.Y.: Your response raises the issue of colonization as a form of occupation. James Baldwin, in his 1966 essay, “A Report from Occupied Territory,” wrote, “Harlem is policed like occupied territory.” This quote made me think of Ferguson, Mo. Some of the protesters in Ferguson even compared what they were seeing to the Gaza Strip. Can you speak to this comparative discourse of occupation?
N.C.: All kinds of comparisons are possible. When I went to the Gaza Strip a few years ago, what came to mind very quickly was the experience of being in jail (for civil disobedience, many times): the feeling, very strange to people who have had privileged lives, that you are totally under the control of some external authority, arbitrary and if it so chooses, cruel. But the differences between the two cases are, of course, vast.
More generally, I’m somewhat skeptical about the value of comparisons of the kind mentioned. There will of course be features common to the many diverse kinds of illegitimate authority, repression and violence. Sometimes they can be illuminating; for example, Michelle Alexander’s analogy of a new Jim Crow, mentioned earlier. Often they may efface crucial distinctions. I don’t frankly see anything general to say of much value. Each comparison has to be evaluated on its own.
G.Y.: These differences are vast and I certainly don’t want to conflate them. The post-9/11 era seems to have ushered in an important space for making some comparisons. Some seem to think that Muslims of Arab descent have replaced African-Americans as the pariah in the United States. What are your views on this?
N.C.: Anti-Arab/Muslim racism has a long history, and there’s been a fair amount of literature about it. Jack Shaheen’s studies of stereotyping in visual media, for example. And there’s no doubt that it’s increased in recent years. To give just one vivid current example, audiences flocked in record-breaking numbers to a film, described in The New York Times Arts section as “a patriotic, pro-family picture,” about a sniper who claims to hold the championship in killing Iraqis during the United States invasion, and proudly describes his targets as “savage, despicable, evil … really no other way to describe what we encountered there.” This was referring specifically to his first kill, a woman holding a grenade when under attack by United States forces.
What’s important is not just the mentality of the sniper, but the reaction to such exploits at home when we invade and destroy a foreign country, hardly distinguishing one “raghead” from another. These attitudes go back to the “merciless Indian savages” of the Declaration of Independence and the savagery and fiendishness of others who have been in the way ever since, particularly when some “racial” element can be invoked — as when Lyndon Johnson lamented that if we let down our guard, we’ll be at the mercy of “every yellow dwarf with a pocket knife.” But within the United States, though there have been deplorable incidents, anti-Arab/Muslim racism among the public has been fairly restrained, I think.
G.Y.: Lastly, the reality of racism (whether it’s anti-black, anti-Arab, anti-Jewish, etc.) is toxic. While there is no single solution to racism, especially in terms of its various manifestations, what do you see as some of the necessary requirements for ending racist hatred?
N.C.: It’s easy to rattle off the usual answers: education, exploring and addressing the sources of the malady, joining together in common enterprises — labor struggles have been an important case — and so on. The answers are right, and have achieved a lot. Racism is far from eradicated, but it is not what it was not very long ago, thanks to such efforts. It’s a long, hard road. No magic wand, as far as I know.
This interview was conducted by email and edited. Previous interviews in this series (with Linda Martin Alcoff, Judith Butler, Joy James, Charles Mills, Falguni A. Sheth, Shannon Sullivan and Naomi Zack) can be found here.
George Yancy is a professor of philosophy at Duquesne University. He has written, edited and co-edited numerous books, including “Black Bodies, White Gazes,” “Look, a White!” and “Pursuing Trayvon Martin,” co-edited with Janine Jones.
 by Dirk Helbing (ETH Zurich)
In the wake of the ongoing digital revolution, we will see a dramatic transformation of our economy and most of our societal institutions. While the benefits of this transformation can be massive, there are also tremendous risks to our society. After the automation of many production processes and the creation of self-driving vehicles, the automation of society is next. This is moving us to a tipping point and to a crossroads: we must decide between a society in which actions are determined in a top-down way and then implemented by coercion or manipulative technologies (such as personalized ads and nudging), and a society in which decisions are taken in a free and participatory way and mutually coordinated. Modern information and communication technology (ICT) enables both, but the latter has economic and strategic benefits. The foundations of human dignity, autonomous decision-making, and democracy are shaking, but I believe that they need to be vigorously defended, as they are not only core principles of livable societies, but also the basis of greater efficiency and success.
“Those who surrender freedom for security will not have, nor do they deserve, either one.” – Benjamin Franklin
In a globalized world, companies and countries are exposed to harsh competition. This produces considerable pressure to create more efficient systems – a tendency which is reinforced by high debt levels.
Big Data seems to be a suitable answer to this. Mining Big Data offers the potential to create new ways to optimize processes, identify interdependencies and make informed decisions. There’s no doubt that Big Data creates new business opportunities, not just because of its application in marketing, but also because information itself is becoming monetized.
Technology gurus preach that Big Data is becoming the oil of the 21st century, a new commodity that can be tapped for profit. Since the virtual currency Bitcoin temporarily became more valuable than gold, it can even literally be said that data can be mined into money – something that would previously have been considered a fairy tale. Although many Big Data sets are proprietary, the consultancy company McKinsey recently estimated that the additional value of Open Data alone amounts to $3 to $5 trillion per year. If the worth of this publicly available information were evenly divided among the world’s population, every person on Earth would receive an additional $700 per year. We now see Open Government initiatives all over the world, aiming to improve services to citizens while having to cut costs. Even the G8 is pushing for Open Data, as it is crucial to mobilizing the full societal and economic capacity.
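The per-person figure is a simple division; here is a quick back-of-envelope check (the world population value is my assumption, roughly that of the mid-2010s, not a number from the McKinsey study):

```python
# Back-of-envelope check of the per-person share of the Open Data estimate.
# Assumption (not from the source): world population of about 7.2 billion.
open_data_value_low = 3e12    # $3 trillion per year
open_data_value_high = 5e12   # $5 trillion per year
world_population = 7.2e9

per_person_low = open_data_value_low / world_population
per_person_high = open_data_value_high / world_population

# At the upper estimate, the share is indeed roughly $700 per person.
print(f"${per_person_low:.0f} to ${per_person_high:.0f} per person per year")
```

So the quoted $700 corresponds to the upper end of the estimated range.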
The potential of Big Data spans every area of social activity, from the processing of human language and the management of financial assets, to the harnessing of information enabling large cities to manage the balance between energy consumption and production. Furthermore, Big Data holds the promise to help protect our environment, to detect and reduce risks, and to discover opportunities that would otherwise have been missed. In the area of medicine, Big Data could make it possible to tailor medications to patients, thereby increasing their effectiveness and reducing their side effects. Big Data could also accelerate the research and development of new drugs and focus resources on the areas of greatest need.
Big Data applications are spreading like wildfire. They facilitate personalized offers, services and products. One of the greatest successes of Big Data is automatic speech recognition and processing. Apple’s Siri understands you when you ask for a restaurant, and Google Maps can lead you there. Google Translate interprets foreign languages by comparing them with a huge collection of translated texts. IBM’s Watson computer even understands human language. It can not only beat experienced quiz show players, but also take care of customer hotlines and patients – perhaps better than humans. IBM has just decided to invest $1 billion to further develop and commercialize the system.
Of course, Big Data plays an important role in the financial sector. Approximately seventy percent of all financial market transactions are now made by automated trading algorithms. In just one day, the entire money supply of the world is traded. So much money also attracts organized crime. Therefore, financial transactions are scanned by Big Data algorithms for abnormalities to detect suspicious activities. The company BlackRock uses similar software, called “Aladdin”, to speculate successfully with funds amounting approximately to the gross domestic product (GDP) of Europe.
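The kind of abnormality scanning mentioned here can be sketched with a simple statistical filter. The following is a toy illustration with invented numbers, not how production fraud detection works; real systems combine many features per transaction and far more sophisticated models:

```python
import statistics

def flag_anomalies(amounts, z_threshold=3.0):
    """Return indices of transactions whose amount deviates from the
    mean by more than z_threshold standard deviations (a toy z-score
    filter for illustration only)."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_threshold]

# Eleven ordinary transfers and one extreme outlier (index 11).
transactions = [120, 80, 95, 110, 105, 90, 100, 115, 85, 98, 102, 50_000]
print(flag_anomalies(transactions))  # flags only the outlier's index
```

The design point is that the algorithm needs no list of “suspicious” values in advance; anything far from the statistical norm is surfaced for human review.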
The Big Data approach is markedly different from classical data mining, where datasets were carefully collected and curated in databases by scientists or other experts. However, each year we now produce as much data as in the entire previous history of humankind. This far exceeds human capacities to curate all data. In just one minute, 700,000 Google queries and 500,000 Facebook comments are sent. Besides this, enormous amounts of data are produced by all the traces that human activities now leave on the Internet. These include shopping and financial data, geo-positioning and mobility data, social contacts, opinions posted in social networks, files stored in Dropbox or other cloud storage, emails sent or received through free accounts, ebooks read (including the time spent on each page and the sentences marked), Google or Apple Siri queries asked, YouTube or TV movies watched on demand, and games played. Modern game engines and smart home equipment can also sense your activities at home, digital glasses can transmit what you see, and gene data are now being massively gathered as well.
Meanwhile, the data sets collected by companies such as eBay, Walmart or Facebook reach the size of petabytes (1 million billion bytes) – one hundred times the information content of the largest library in the world, the U.S. Library of Congress. The mining of Big Data opens up entirely new possibilities for process optimization, the identification of interdependencies, and decision support. However, Big Data also comes with new challenges, which are often characterized by four criteria: the sheer volume of the data, the velocity at which they are produced and must be processed, their variety of formats and sources, and their often questionable veracity. Therefore, completely new algorithms, i.e. new computational methods, had to be developed.
To create value from data, it is crucial to turn raw data into useful information and actionable knowledge; some even aim at producing “wisdom” and “clairvoyance” (predictive capabilities). This process requires powerful computer algorithms. Machine learning algorithms do not just watch out for particular patterns; they can find patterns by themselves. This has led Chris Anderson to famously postulate “the end of theory”, i.e. the hypothesis that the data deluge makes the scientific method obsolete. If there were only a big enough quantity of data, machine learning could extract the relevant patterns from it and come to the right conclusions. This hypothesis has become the credo of Big Data analytics, even though this almost religious belief lacks a proper foundation. I therefore call here for a proof of concept, by formulating the following test: Can universal machine learning algorithms, when mining huge masses of experimental physics data, discover the laws of nature by themselves, without the support of human knowledge and intelligence?
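What it means for an algorithm to “find patterns by itself” can be illustrated with the simplest unsupervised method. The toy one-dimensional k-means routine below (my own minimal sketch, not anything from the text) recovers two hidden groups in the data without ever being told what the groups mean:

```python
import random

def kmeans_1d(values, k=2, iterations=20, seed=42):
    """Toy one-dimensional k-means: repeatedly assign each value to its
    nearest center, then move each center to its cluster's mean."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)          # random initial centers
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster (keep it if empty).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two "hidden" groups around 10 and 100; the algorithm recovers them.
data = [9, 10, 11, 12, 8, 99, 101, 100, 98, 102]
print(kmeans_1d(data))
```

No labels, thresholds, or group definitions are supplied; the structure emerges from the data alone, which is the (limited) sense in which such algorithms learn by themselves.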
In spite of these issues, deep learning algorithms are achieving great successes in everyday applications that do not require an understanding of a hidden logic or of causal interdependencies. These algorithms are universal learning procedures which, theoretically, could learn any pattern or input-output relation, given enough time and data. Such algorithms are particularly strong in pattern recognition tasks, i.e. reading, listening, watching, and classifying contents. As a consequence, experts believe that about 50% of all current jobs in the industrial and service sectors will be lost in the next 10-20 years. Moreover, abilities comparable to those of the human brain are expected to be reached within the next 5 to 25 years. This has led to a revival of Artificial Intelligence, now often appearing under the label “Cognitive Computing”.
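To make the “learn any input-output relation from examples” claim concrete, here is the oldest and simplest learning machine, a single perceptron, trained on the logical AND function. This is a deliberately minimal sketch of supervised learning, nothing like a modern deep network:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single-neuron classifier from labeled
    examples, nudging the weights after each prediction error."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The AND relation, presented only as input-output examples.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_samples])
```

The rule is never programmed in; it is recovered from examples alone. Deep networks stack millions of such units, which is what makes them (in principle) universal function approximators.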
To be competitive with intelligent machines, humans will increasingly need “cognitive assistants” in the future. These are digital tools such as Google Now. However, as cognitive assistants grow more powerful at an exponentially accelerating pace, they would soon become something like virtual colleagues, then digital coaches, and finally our bosses. Robots acting as bosses are already being tested.
Scientists are also working on “biological upgrades” for humans. The first cyborgs, i.e. humans who have been technologically upgraded, already exist. The best known of them is Neil Harbisson. At the same time, there has been considerable progress in producing robots that look and behave increasingly like humans. It must be assumed that many science fiction fantasies shown in cinemas and on TV may soon become reality.
Recently, however, there are increasing concerns about artificial super-intelligences, i.e. machines that would be more intelligent than humans. In fact, computers are now better at calculating, at playing chess and most other strategic games, at driving cars, and they are performing many other specialized tasks increasingly well. Certainly, intelligent multi-purpose machines will soon exist.
Only two or three years ago, most people would have considered it impossible that algorithms, computers, or robots would ever challenge humans as the crown of creation. This has changed. Intelligent machines are learning by themselves, and it is now conceivable that robots could build other robots that are smarter than they are. The resulting evolutionary progress is quickly accelerating, and it is therefore just a matter of time until there are machines smarter than us. Perhaps such super-intelligences already exist. In the following, I present some quotes from notable scientists and technology experts who have raised concerns and tried to alert the public to the problems we are running into:
For example, Elon Musk of Tesla Motors voiced:
“I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful. … I am increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. … ”
Similar critique comes from Nick Bostrom at Oxford University.
Stephen Hawking, perhaps the most famous physicist of our time, recently said:
“Humans who are limited by slow biological evolution couldn’t compete and would be superseded. … The development of full artificial intelligence could spell the end of the human race. … It would take off on its own, and re-design itself at an ever increasing rate.”
Furthermore, Bill Gates of Microsoft was quoted:
“I am in the camp that is concerned about super intelligence. … I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
Steve Wozniak, co-founder of Apple, formulated his worries as follows:
“Computers are going to take over from humans, no question … Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people … If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently … Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know …”
Personally, I think more positively about artificial intelligence, but I believe that we should engage in distributed collective intelligence rather than creating a few extremely powerful super-intelligences we may not be able to control. It seems that various big IT companies in Silicon Valley are already engaged in building super-intelligent machines. It was also recently reported that Baidu, the Chinese search engine company, wanted to build a “China Brain Project” and was looking for significant financial contributions from the military. To be competitive, then, must we sacrifice our privacy for a society-spanning Big Data and Deep Learning project to predict the future of the world? As will become clear later on, I don’t think so, because Big Data approaches and the learning of facts from the past are usually bad at predicting fundamental shifts, which occur at societal tipping points – and these are what we mainly need to care about. The combination of explanatory models with little (but the right kind of) data is often superior. It can deliver a better description of macro-level societal and economic change, as I will show below, and it is macro-level effects that really matter. Additionally, one should invest in tools that reveal mechanisms for the management and design of better systems. Such innovative solutions, too, cannot be found by mining data of the past and learning patterns in them.
Personal data of all kinds are now being collected by many companies, most of which are not well known to the public. While we surf the Internet, every single click is recorded by cookies, super-cookies and other mechanisms, mostly without our consent. These data are widely traded, even though this often violates applicable laws. By now, there exist about 3,000 to 5,000 personal records on more or less every individual in the industrialized world. These data make it possible to map the way each person thinks and feels. Their clicks would not only produce a unique fingerprint identifying them (perhaps even when surfing anonymously); they would also reveal the political party they are likely to vote for (even though the anonymous vote is an important basis of democracies). Their Google searches would furthermore reveal the actions they are likely to take next (including likely financial trades). There are even companies, such as Recorded Future and Palantir, that try to predict future individual behavior based on the data available about each of us. Such predictions seem to work pretty well, in more than 90% of all cases. It is often believed that this would eventually make the future course of our society predictable and controllable.
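The “unique fingerprint” produced by browsing traces can be sketched in a few lines: several individually harmless attributes, combined and hashed, identify a browser almost uniquely. The attribute values below are invented for illustration; real fingerprinting draws on dozens of signals (canvas rendering, installed fonts, audio stack, and more):

```python
import hashlib

def browser_fingerprint(attributes):
    """Combine browser attributes into one stable identifier by hashing
    a canonical (sorted) serialization of the key-value pairs."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Invented example profiles; "bob" differs from "alice" in one attribute.
alice = {"user_agent": "Mozilla/5.0 (X11; Linux x86_64)",
         "screen": "1920x1080", "timezone": "UTC+1", "language": "de-DE"}
bob = dict(alice, language="en-US")

# Identical attributes give the same fingerprint; one change gives a new one.
print(browser_fingerprint(alice) == browser_fingerprint(dict(alice)))
print(browser_fingerprint(alice) == browser_fingerprint(bob))
```

Because the combination of even a handful of such attributes is rare in the population, the hash re-identifies a visitor across sites without any cookie at all.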
In the past, the attitude was “nobody is perfect, people make mistakes”. Now, with the power of modern information technologies, some keen strategists hope that our society could be turned into a perfect clockwork. The feasibility of this approach is already being tested. Personalized advertising in fact tries to manipulate people’s choices, based on detailed knowledge of a person, including how he or she thinks, feels, and responds to certain kinds of situations. These approaches are becoming increasingly effective, exploiting biases in human decision-making and also subliminal messages. Such techniques address people’s subconscious, such that they would not necessarily be aware of the reasons behind their actions, similar to acting under hypnosis.
Manipulating people’s choices is also increasingly being discussed as a policy tool, called “nudging” or “soft paternalism”. Here, people’s decisions and actions would be manipulated by the state through digital devices to reach certain outcomes, e.g. environmentally friendly or healthier behavior, or even certain election results. Related experiments are already being carried out.
In the following, I discuss some of the social, economic, legal, ethical and other implications of the above digital technologies and their use. Like all other technologies, Big Data, Artificial Intelligence, and Nudging can produce harmful side effects, but in this case the impact on our economy and society may be massive. To benefit from the opportunities of digital technologies and minimize their risks, it will be necessary to combine certain technological solutions with social norms and legal regulations. In the following, I attempt to give a number of initial hints, but the discussion certainly cannot give a full account of all issues that need to be addressed.
The risks of Big Data are manifold. The security of digital communication has been undermined. Cyber crime, including data, identity and financial theft, is exploding, now causing annual damage on the order of 3 trillion dollars, a figure that is still growing exponentially. Critical infrastructures such as energy, financial and communication systems are threatened by cyber attacks. They could, in principle, be made dysfunctional for an extended period of time, thereby seriously disrupting our economy and society. Concerns about cyber wars and digital weapons ("D weapons") are growing quickly, as these may be even more dangerous than atomic, biological and chemical (ABC) weapons.
Besides cyber risks, there is a rather long list of other problems. Results of Big Data analytics are often taken for granted and assumed to be objective. This is dangerous, because the effectiveness of Big Data is sometimes based more on beliefs than on facts. For example, it is far from clear that surveillance cameras and predictive policing really reduce organized and violent crime significantly, or that mass surveillance counters terrorism more effectively than classical investigation methods. Moreover, one of the showcase examples of the power of Big Data analytics, Google Flu Trends, has recently been found to make poor predictions, partly because advertisements bias user behavior and search algorithms keep being changed, so that the results are neither stable nor reproducible. In fact, Big Data curation and calibration efforts are often minimal. As a consequence, the underlying datasets are typically not representative, and they may contain many errors. Last but not least, Big Data algorithms are frequently used to reveal optimization potentials, but their results may be unreliable or may not reflect any causal relationships. Therefore, conclusions drawn from Big Data are not necessarily correct.
A naive application of Big Data algorithms can easily lead to mistakes and wrong conclusions. The error rate in classification problems (e.g. the distinction between "good" and "bad" risks) is often significant, so wrong decisions and discrimination are serious problems. In fact, anti-discrimination laws may be implicitly undermined, as the results of Big Data algorithms may imply disadvantages for women, handicapped people, or ethnic, religious, and other minorities. This is because insurance offers, product prices in Internet shops, and bank loans increasingly depend on behavioral variables and on the specifics of one's social environment, too. It might happen, for example, that the conditions of a personal loan depend on the behavior of people one has never met. In the past, some banks have even terminated loans when neighbors failed to make their payments on time. In other words, as we lose control over our personal data, we are losing control over our lives, too. How will we be able to take responsibility for our lives in the future if we can no longer control them?
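The loan example can be made concrete with a small sketch. The scoring rule and all numbers below are hypothetical, invented purely for illustration: a rate that mixes the applicant's own record with a neighborhood "proxy" variable treats two identical applicants differently.

```python
# Hypothetical scoring rule: the loan interest rate depends not only on the
# applicant's own record but also on the average repayment rate in their
# neighborhood -- a proxy variable the applicant cannot control.
def loan_rate(own_score: float, neighborhood_repayment: float) -> float:
    base = 8.0                                 # assumed base rate in percent
    individual = (1 - own_score) * 4           # own credit-history component
    proxy = (1 - neighborhood_repayment) * 4   # neighbors' behavior component
    return round(base + individual + proxy, 2)

# Two applicants with identical personal records...
alice = loan_rate(own_score=0.9, neighborhood_repayment=0.95)
bob   = loan_rate(own_score=0.9, neighborhood_repayment=0.60)

print(alice, bob)  # 8.6 10.0 -- Bob pays more purely because of where he lives
```

The point is not the particular formula but that any rule containing such a proxy term makes a person's conditions depend on the behavior of people they have never met.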
This brings us to the issue of privacy, where a number of important points must be considered. First of all, surveillance scares people, particularly minorities. All minorities are vulnerable, yet the success of our society depends on them (e.g. politicians, entrepreneurs, intellectuals). As the "Volkszählungsurteil" (the German Constitutional Court's census ruling) correctly concludes, the continuous and uncontrolled recording of data about individual behavior undermines the chances of personal, but also societal, development. Society needs innovation to adjust to change (such as demographic, environmental, technological or climate change). However, innovation requires a cultural setting that allows one to experiment and make mistakes. In fact, many fundamental inventions were made by accident or even by mistake (porcelain, for example, resulted from attempts to produce gold). A global map of innovation clearly shows that fundamental innovation mainly happens in free and democratic societies. Experimenting is also needed to become an adult who is able to judge situations and make responsible decisions.
Therefore, society needs to be run in a way that is tolerant of mistakes. But today one may already get a speeding ticket for driving 1 km/h too fast (see below). In our over-regulated world, one might in future get tickets for almost anything, since Big Data would make it possible to discover and sanction every small mistake. The USA already has about ten times as many people in prison, relative to its population, as Europe (and more prisoners in total than China or Russia). Is this our future, and does it have anything to do with the free society we used to live in? If, on the other hand, we punished only a sample of the people making mistakes, how would this be compatible with fairness? Wouldn't it end in arbitrariness and undermine justice? And wouldn't the presumption of innocence be gone, a principle based on the idea that most of us are good citizens, and only a few are malicious and to be found guilty?
Undermining privacy cannot work well. It signals distrust of the citizens, and this in turn undermines the citizens' trust in the government, which is the basis of its legitimacy and power. The saying "trust is good, but control is better" is not entirely correct: control cannot fully replace trust. A well-functioning and efficient society needs a suitable combination of both.
“Public” without “private” wouldn’t work well. Privacy provides opportunities to explore new ideas and solutions. It helps to recover from the stress of daily adaptation and reduces conflict in a dense population of people with diverse preferences and cultural backgrounds.
Public and private are two sides of the same coin. If everything is public, this will eventually undermine social norms. In the long run, the consequence could be a shameless society or, if every deviation from established norms is sanctioned, a totalitarian society.
[Figure caption: "… and I was not even the driver …"]
Therefore, while the effects of mass surveillance and privacy intrusion are not immediately visible, they might still cause long-term damage by undermining the fabric of our society: social norms and culture. It is highly questionable whether the economic benefits would really outweigh this, and whether a control-based digital society would work at all. I rather expect such societal experiments to end in disaster.
The globalization and networking of our world have created a level of interdependency and complexity that no individual can fully grasp. This leads to the awkward situation that each of us sees only part of the picture, which has promoted the idea of building artificial super-intelligences that might be able to survey the entire knowledge of the world. However, learning such knowledge (not just the facts, but also their implications) might progress more slowly than our world changes and human knowledge grows.
It is also important to consider that the meaning of data depends on context; this becomes particularly clear for ambiguous content. Therefore, like our own brains, an artificial intelligence based on deep learning will sometimes see spurious correlations, and it will probably have some prejudices, too.
Unfortunately, having more information than humans (as cognitive computers do today) does not make a machine objective or right. The problem of "over-fitting", i.e. the tendency to fit meaningless, random patterns in the data, is just one possible issue. Parameter sensitivity and "chaotic" or "turbulent" system dynamics further restrict the possibilities to predict future events, to assess current situations, or even to identify the correct model parameters describing past events. Despite these constraints, a data-driven approach will always deliver some output, but this might be just an "opinion" of an intelligent machine rather than a fact. This becomes clear if we imagine running two identical super-intelligent machines in different places. As they would not be fed with exactly the same information, they would have different learning histories and would sometimes come to different conclusions. So super-intelligence is no guarantee of finding a solution that corresponds to the truth. And what if a super-intelligent machine catches a virus and develops something like a "brain disease"?
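The over-fitting problem can be demonstrated with a minimal experiment. A model that memorizes purely random training labels (here a 1-nearest-neighbour classifier, chosen only for illustration) looks perfect on the data it has seen, yet has no predictive power on fresh data:

```python
import random

random.seed(0)

# Random points with purely random labels: there is no pattern to learn.
def sample(n):
    return [([random.random() for _ in range(5)], random.choice([0, 1]))
            for _ in range(n)]

train, test = sample(200), sample(200)

def nearest_label(x, data):
    # 1-nearest-neighbour: effectively memorizes the training data.
    return min(data, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

def accuracy(data):
    return sum(nearest_label(x, train) == y for x, y in data) / len(data)

print(f"train accuracy: {accuracy(train):.2f}")  # 1.00 -- a "perfect" fit
print(f"test accuracy:  {accuracy(test):.2f}")   # ~0.5 -- no real predictive power
```

The apparent pattern exists only in the training sample; exactly this gap between in-sample fit and out-of-sample performance is what makes naive Big Data conclusions unreliable.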
The greatest problem is that we might be tempted to apply powerful tools such as super-intelligent machines to shape our society at large. As argued above, super-intelligences would make mistakes, too, but the resulting damage might be much larger and even disastrous. Besides this, super-intelligences might emancipate themselves and become uncontrollable. They might also start to act in their own interest, or to lie.
Most importantly, powerful tools will always attract people striving for power, including organized criminals, extremists, and terrorists. This is particularly concerning because there is no 100% reliable protection against serious misuse. At the 2015 WEF meeting, Misha Glenny said: "There are two types of companies in the world: those that know they've been hacked, and those that don't". In fact, the computer systems of many major companies, the US military, the Pentagon and the White House have been hacked in the past, not to mention the problem of data leaks. Therefore, the growing concerns about building and using super-intelligences seem largely justified.
The use of information technology is changing our behavior, and this fact invites misuse, too. Society-scale experiments with manipulative technologies are likely to have serious side effects. In particular, influencing people's decision-making undermines the principle of the "wisdom of crowds", on which democratic decision-making and also the functioning of financial markets are based. For the "wisdom of crowds" to work, one needs sufficiently well-educated people who gather and judge information separately and make their decisions independently. Influencing people's decisions will increase the likelihood of mistakes, which might be costly. Moreover, the information basis may become so biased over time that no one, including government institutions and intelligent machines, would be able to make reliable judgments.
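A minimal simulation illustrates why independence matters for the "wisdom of crowds" (all numbers are illustrative assumptions): when every estimate is pulled toward a publicly visible anchor, the crowd average inherits the anchor's bias instead of averaging individual errors away.

```python
import random

random.seed(42)
TRUTH = 100.0  # the quantity the crowd tries to estimate

def independent_estimates(n):
    # Each person judges independently, with individual noise.
    return [random.gauss(TRUTH, 20) for _ in range(n)]

def influenced_estimates(n, anchor, weight=0.7):
    # Each estimate is pulled toward a publicly visible anchor
    # (e.g. the first loudly stated opinion): no longer independent.
    return [weight * anchor + (1 - weight) * random.gauss(TRUTH, 20)
            for _ in range(n)]

def crowd_error(estimates):
    return abs(sum(estimates) / len(estimates) - TRUTH)

ind = crowd_error(independent_estimates(1000))
inf = crowd_error(influenced_estimates(1000, anchor=160.0))

print(f"independent crowd error: {ind:.1f}")  # small: noise averages out
print(f"influenced crowd error:  {inf:.1f}")  # large: the anchor's bias persists
```

With independent judgments, averaging a thousand noisy estimates almost recovers the truth; once a shared anchor enters every judgment, no amount of averaging removes the bias.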
Eli Pariser raised a related issue, which he called the "filter bubble". As we increasingly live in a world of personalized information, we are less and less confronted with information that does not fit our beliefs and tastes. While this creates the feeling of living in a world we like, we lose awareness of other people's needs and points of view, and when confronted with them, we may fail to communicate and interact constructively. For example, the US political system seems to increasingly suffer from the inability of Republicans and Democrats to make compromises that are good for the country. Analyses of their political discourse on certain subjects show that they do not just hold different opinions; they also use different words, so that there is little chance of developing a shared understanding of a problem. In this sense, some modern information systems have not made it easier to govern a country; on the contrary.
From this perspective, manipulative technologies may be seen as attempts to "program" people. Some crowdsourcing services, such as Mechanical Turk, already come pretty close to this. Here, people pick up jobs of all kinds, each of which may take just a few minutes. For example, one may have a 1,000-page manual translated within a day by breaking it down into sufficiently many micro-translation jobs. In principle, however, one could think of any kind of task, and people might not even be aware of the outcome they are jointly producing.
Importantly, manipulation incapacitates people and makes them less capable of solving problems by themselves. On the one hand, this means that they increasingly lose control of their judgments and decision-making. On the other hand, who should be held responsible for mistakes based on manipulated decisions? The one who took the wrong decision, or the one who made him or her take it? Probably the latter, particularly as human brains are increasingly unable to keep up with the performance of computer systems (think, for example, of high-frequency trading).
Finally, we must be aware of another important issue. Some keen strategists believe that manipulative technologies would be perfect tools to create a society that works like a perfect machine. The idea is as follows: a super-intelligent machine would figure out an optimal solution to a certain problem and would then try to implement it using punishment or manipulation, or both. In this context, one should re-evaluate what purposes recent revisions of security laws (such as the Swiss BÜPF) might be used for, besides fighting actual terrorists. It is certainly concerning if people can be put in jail for the contents of their computer hard disks while, at the same time, hard disks are known to have back doors and secret services are allowed to download material onto them. This enables serious misuse, but it also calls into question whether hard disk contents can still be accepted as evidence in court.
Of course, one must ask whether it would really be possible to run a society by a combination of surveillance, manipulation and coercion. The answer is: probably yes, but given the complexity of our world, I expect this would not work well, and not for long. One might therefore say that, in complex societies, the times when a "wise king" or "benevolent dictator" could succeed are gone. But there is a serious danger that some ambitious people might still try to implement the concept and take drastic measures in desperate attempts to succeed. Minorities, who are often seen as producing "unnecessary complexity", would probably come under pressure, which would reduce social, cultural and economic diversity.
As a consequence, this would eventually lead to a socio-economic “diversity collapse”, i.e. many people would end up behaving similarly. While this may appear favorable to some people, one must recognize that diversity is the basis of innovation, economic development, societal resilience, collective intelligence, and individual happiness. Therefore, socio-economic and cultural diversity must be protected in a similar way as we have learned to protect biodiversity.
Altogether, it is more appropriate to compare a social or economic system to an ecosystem than to a machine. It then becomes clear that a reduction of diversity corresponds to the loss of biological species in an ecosystem. In the worst case, the ecosystem could collapse. By analogy, the social or economic system would lose performance and become less functional. This is what typically happens in totalitarian regimes, and it often ends with wars as a result of attempts to counter the systemic instability caused by a diversity collapse.
In conclusion, when coping with diversity, engaging in interoperability is largely superior to attempts at standardization. That is why I suggest below developing personal digital assistants that help create benefits from diversity.
I am a strong supporter of using digital technologies to create new business opportunities and to improve societal well-being. Therefore, I do not think we should stop the digital revolution (such attempts would fail anyway, given that all countries are exposed to harsh international competition). However, as with every technology, there are potentially serious side effects, and there is a dual-use problem.
If we use digital technologies in the wrong way, the results could be disastrous for our economy, ending in mass unemployment and economic depression. Irresponsible uses could also be bad for our society, potentially ending (intentionally or not) in more or less totalitarian regimes with few individual freedoms. There are also serious security issues due to exponentially increasing cyber crime, which is partially related to the homogeneity of our current Internet, the lack of barriers (omitted for the sake of efficiency), and the back doors in many hardware and software systems.
Big Data creates further threats. It can be used to ruin personal careers and companies, and even to launch cyber wars. Just as we do not allow anyone to own a nuclear bomb or to drive a car without brakes and other safety equipment, we must regulate and control the use of Big Data, too, including its use by governments and secret services. This seems to require a sufficient level of transparency; otherwise it is hard for anyone to judge whether such uses can be trusted and what the dangers are.
The use of Big Data should meet certain quality standards. This includes the following aspects:
It must be clear that digital technologies will only thrive if they are used in a responsible way. For companies, the trust of consumers and users is essential for gaining and maintaining a large customer base. For governments, public trust is the basis of legitimacy and power. Losing trust would, therefore, cause irrevocable damage.
A current problem is the reliance on cheap technology: most software, for example, is neither well tested nor secure. Europe should therefore invest in high-quality products and services. Considering its delay in developing data products and services, Europe must in any case find a strategy that differentiates it from its competitors (which could include an open data and open innovation strategy, too).
Most, if not all, functionality currently produced with digital technologies (including certain predictive "crystal ball" functionality) can also be obtained in different ways, particularly in ways that are compatible with constitutional and data protection laws (see also the Summary, Conclusion, and Discussion). This may come at higher cost and slightly reduced efficiency, but it might be cheaper overall than risking considerable damage (think of the annual loss of 3 trillion dollars to cyber crime, a number that is still growing exponentially). Remember also that we have imposed safety requirements on nuclear, chemical, genetic, and other technologies (such as cars and planes) for good reasons. In particular, I believe that we should not, and would not need to, give up the very important principle of informational self-determination in order to unleash the value of personal data. Informational self-control is of key importance for preserving democracy, individual freedom, and responsibility for our lives. To achieve catalytic and synergy effects, I strongly advise engaging in culturally fitting uses of Information and Communication Technologies (ICT).
In order to avoid slowing down beneficial data applications too much, one might think of continuously rising standards. Some new laws and regulations might become applicable only after two or three years, to give companies sufficient time to adjust their products and operations. Moreover, it would be useful to have open technology standards so that all companies (including small and medium-sized ones) have a chance to meet new requirements with reasonable effort. Requiring a differentiated kind of interoperability could be of great benefit.
Modern data applications go beyond Big Data analytics towards (semi-)automatic systems, which typically offer users the possibility to control certain system parameters (though sometimes the only option is to turn the automatic mode, or the entire system, off). Autopilots, high-frequency trading, and self-driving cars are well-known examples. Will we in future even see an automation of society, including automated voting by digital agents mirroring ourselves?
Automated or autonomous systems are often not 100 percent controllable, as they may operate at a speed that humans cannot compete with. One must also realize that today's artificially intelligent systems are not fully programmed: they learn, and they may therefore behave in ways that have not been tested before. Even if their components were programmed line by line and thoroughly tested without showing any sign of error, the interaction of the components may lead to unexpected behaviors. This is often the case, for example, when a car with sophisticated electronic systems shows surprising behavior (such as suddenly refusing to operate). In fact, unexpected ("emergent") behavior is a typical feature of many complex dynamical systems.
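Emergent behavior from component interaction can be illustrated with a toy linear system: two feedback loops, each perfectly stable in isolation, diverge once they are coupled. The parameters are chosen purely for illustration:

```python
def simulate(a, k, steps=50):
    # Two components, each a simple stable feedback loop (|a| < 1),
    # coupled to each other with strength k.
    x, y = 1.0, 0.0
    for _ in range(steps):
        x, y = a * x + k * y, a * y + k * x
    return abs(x) + abs(y)

alone   = simulate(a=0.5, k=0.0)  # each component by itself: decays to ~0
coupled = simulate(a=0.5, k=0.7)  # the same components, interacting: blows up

print(f"uncoupled amplitude after 50 steps: {alone:.2e}")
print(f"coupled amplitude after 50 steps:   {coupled:.2e}")
```

Mathematically, the coupled system has an amplification factor a + k = 1.2 per step, even though each component alone shrinks by the factor a = 0.5; testing the parts separately would never reveal the instability of the whole.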
The benefits of intelligent learning systems can certainly be huge. However, we must understand that they will sometimes make mistakes, too, even when automated systems are superior to humans at a task. Therefore, one should make a reasonable effort to ensure that the mistakes of an automated system are outweighed by its benefits. Moreover, possible damages should be sufficiently small or rare, i.e. acceptable to society. In particular, such damages should not pose any large-scale threats to critical infrastructures, our economy, or our society. As a consequence, I propose the following:
Manipulative technologies are probably the most dangerous among the various digital technologies discussed in this paper, because we might not even notice the manipulation attempts.
In the past, we lived in an information-poor world. Then we had enough time to assess the value of information, but we did not always have enough information to decide well. With more information (Web search, Wikipedia, digital maps, etc.), orientation became increasingly easy. Now, however, we are faced with a data deluge and confronted with more information than we can assess and process. We are blinded by too much information, and this makes us vulnerable to manipulation. We increasingly need information filters, and the question is: who should produce these filters? A company? Or the state?
In both cases this might have serious implications for our society, because the filters would pursue particular interests (e.g. maximizing clicks on ads or manipulating people in favor of nationalism). In this way, we might get stuck in a "filter bubble". Even if this filter bubble felt like a golden cage, it would limit our imagination and capacity for innovation. Moreover, mistakes can and will always happen, even when best efforts are made to reach an optimal outcome.
While some problems can be solved well in a centralized (top-down) fashion, other optimization problems are notoriously hard and better solved in a distributed way; innovation is one of these areas. The main difficulty is that the most fundamental question of optimization is unsolved, namely what goal function to choose. When a bad goal function is chosen, the outcomes will be bad, but we may notice this only after many years. As mistakes in choosing the goal function will surely happen sometimes, it could end in disaster if everyone applied the same goal function.
Therefore, one should apply something like a portfolio strategy: under strongly variable and hardly predictable conditions, a diverse set of strategies works best. Hence, pluralistic information filtering is needed. In other words, customers, users, and citizens should be able to create, select, share and adapt the information filters they use, thereby creating an evolving ecosystem of increasingly better filters. In fact, everyone would probably use several different filters (for example, "What is currently most popular?", "What is most controversial?", "What is trendy in my peer group?", "Surprise me!"). In contrast, if we leave it to a company or the state to decide how we see the world, we may end up with biased views, and this could lead to terrible mistakes. It could, for example, undermine the "wisdom of crowds", which is currently the basis of free markets and democracies (with benefits such as a high level of performance [not necessarily growth], quality of life, and the avoidance of mistakes such as wars).
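The portfolio argument can be sketched in a few lines. The regimes and payoffs below are invented for illustration: two strategies that each fail in one regime combine into a portfolio with the same average payoff but no fluctuation at all.

```python
from statistics import mean, pstdev

# Stylized world with two alternating regimes; each pure strategy does well
# in one regime and badly in the other (payoffs are illustrative assumptions).
regimes = ["boom", "bust"] * 10
payoff = {
    "A": {"boom": 2.0, "bust": -1.0},
    "B": {"boom": -1.0, "bust": 2.0},
}

def returns(weights):
    # A portfolio mixes the two strategies with fixed weights.
    return [sum(w * payoff[s][r] for s, w in weights.items()) for r in regimes]

pure_a    = returns({"A": 1.0, "B": 0.0})
portfolio = returns({"A": 0.5, "B": 0.5})

# Same average payoff, but the diversified portfolio does not fluctuate at all.
print(mean(pure_a), pstdev(pure_a))        # 0.5 1.5
print(mean(portfolio), pstdev(portfolio))  # 0.5 0.0
```

The same logic carries over to information filters: a plurality of filters hedges against the possibility that any single filter (or goal function) turns out to be badly chosen for the next "regime".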
In a world characterized by information overload, unbiased and reliable information becomes ever more important; otherwise the number of mistakes will probably increase. For the digital society to succeed, we must therefore put safeguards in place against information pollution and biases. Reputation systems might be a suitable instrument, provided enough information providers compete efficiently with each other to supply more reliable and more useful information. Additionally, legal sanctions might be necessary to counter intentionally misleading information.
Consequently, advertisements should be marked as such, and the same applies to manipulation attempts such as nudging. In other words, the user, customer or citizen must be given the possibility to consciously decide for or against a certain decision or action; otherwise individual autonomy and responsibility are undermined. Just as customers of medical drugs are warned of potential side effects, one should state something like: "This product manipulates your decisions and tries to make you behave in a healthier way (or in an environmentally friendly way, or whatever it tries to achieve)". The customer would then be aware of the likely effects of the information service and could actively decide whether he or she wants them or not.
Note, however, that it is currently not clear what the side effects of incentivizing the use of manipulative technologies would be. Applied on a large scale, it might be almost as bad as hidden manipulation: dangerous herding effects might occur (including mass psychology of the kind seen in hypes, stock market bubbles, unhealthy levels of nationalism, or the particularly extreme form it took during the Third Reich).
Ethical guidelines, demanding certain quality standards, and sufficient transparency might also be necessary. Otherwise, the large-scale application of manipulative technologies could intentionally or unintentionally undermine the individual freedom of decision-making and the basis of democracies, particularly when nudging techniques become highly effective and are used to manipulate public opinion at large.
Digital technologies offer great benefits, but also substantial risks. They may help us solve some long-standing problems, but they may also create new and even bigger ones. In particular, if wrongly used, individual autonomy and freedom, responsible decision-making, democracy and the basis of our legal system are at stake. The foundations on which our society is built might be damaged, intentionally or unintentionally, within a very short period of time, which may not leave us enough opportunity to prepare for or respond to the challenges.
Currently, some or even most Big Data practices violate applicable data protection laws. Of course, laws can be changed, but some uses of Big Data are also highly dangerous and incompatible with our constitution and culture. These challenges must be addressed by a combination of technological solutions (such as personal data stores), legal regulations, and social norms. Distributed data, distributed systems and distributed control, sufficiently many competitors, and suitably designed reputation systems might be the most efficient way to avoid misuse of digital technologies, but transparency must be increased as well.
Even though our economy and society will change in the wake of the digital revolution, we must find a way that is consistent with our values, culture, and traditions, because this will create the largest synergy effects. In other words, a China or Singapore model is unlikely to work well in Europe. We must take the next step in our cultural, economic and societal evolution.
I am convinced that it is now possible to use digital technologies in ways that bring together the perspectives of science, politics, business, society, cultural traditions, ethics, and perhaps even religion. Specifically, I propose to use the Internet of Things as the basis for a participatory information system, called the Planetary Nervous System or Nervousnet, to support tailored measurements, awareness, coordination, collective intelligence, and informational self-determination. The system I suggest would have a resilient design and could be imagined as a huge catalyst of socio-economic value generation. It would also support real-time feedback through a multi-dimensional exchange system ("multi-dimensional finance"). This approach would allow one to massively increase the efficiency of many systems, as it would support the self-organization of the structures, properties and functions we would like to have, based on local interactions. The distributed approach I propose is consistent with individual autonomy, free decision-making, the democratic principle of participation, and free entrepreneurial activities and markets. In fact, wealth is created not only through economies of scale (i.e. cheap mass production), but also through social interaction (that is why cities are drivers of the economy).
The proposed approach would also consider (and potentially trade) externalities, thereby supporting other-regarding and fair solutions, which would be good for our environment, too. Finally, everyone could reap the benefits of diversity by using personal digital assistants, which would support the coordination and cooperation of diverse actors and reduce conflict.
In conclusion, we have the choice between two kinds of digital society: (1) a society in which people are expected to obey and to perform tasks like robots or the gearwheels of a perfect machine, characterized by top-down control, limitations of freedom and democracy, and potentially high unemployment rates; and (2) a participatory society with space for humans with sometimes surprising behaviors, characterized by autonomous but responsible decision-making supported by personal digital assistants, in which information is opened up for everyone's benefit in order to reap the rewards of diversity, creativity, and exponential innovation. Which society would you choose?
The FuturICT community (www.futurict.eu) has recently worked out a framework for a smart digital society, oriented toward international leadership, economic prosperity, social well-being, and societal resilience, based on the well-established principle of subsidiarity. With its largely distributed, decentralized approach, it is designed to cope with the complexity of our globalized world and to benefit from it.
The FuturICT approach takes the following insights into account:
As a consequence of the above insights, I believe we do not need to end privacy and informational self-determination in order to reap the benefits of data. The best use of information systems is made if they boost our society and economy to full capacity, i.e. if they use the knowledge, skills, and resources of everyone in the best possible way. This is of strategic importance and requires suitably designed participatory information systems that optimally exploit the special properties of information. In fact, the value of participatory systems, as pointed out by Jeremy Rifkin and others, becomes particularly clear if we think of the great success of crowdsourcing (Wikipedia, OpenStreetMap, GitHub, etc.), crowdfunding, citizen science and collective ("swarm") intelligence. So let's build these systems together. What are we waiting for?
 This document includes and reproduces some paragraphs of the following documents: "Big Data – Zauberstab und Rohstoff des 21. Jahrhunderts" published in Die Volkswirtschaft – Das Magazin für Wirtschaftspolitik (5/2014), see http://www.dievolkswirtschaft.ch/files/editions/201405/pdf/04_Helbing_DE.pdf; for an English translation, see chapter 7 of D. Helbing (2015) Thinking Ahead – Essays on Big Data, Digital Revolution, and Participatory Market Society (Springer, Berlin).
 I would add “efficiency” or “performance” here as well.
 McKinsey and Co. Open data: Unlocking innovation and performance with liquid information, http://www.mckinsey.com/insights/business_technology/open_data_unlocking_innovation_and_performance_with_liquid_information
 Chris Anderson, The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. WIRED Magazine 16.07, http://archive.wired.com/science/discoveries/magazine/16-07/pb_theory
 One of the leading experts in this field is Jürgen Schmidhuber.
 Jeremy Howard, The wonderful and terrifying implications of computers that can learn, TEDx Brussels, http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn
 The point in time when this happens is sometimes called the "singularity", following Ray Kurzweil.
 Süddeutsche (11.3.2015) Roboter als Chef, http://www.sueddeutsche.de/leben/roboter-am-arbeitsplatz-billig-freundlich-klagt-nicht-1.2373715
 Such movies often serve to familiarize the public with new technologies and realities, and to give them a positive touch (including “Big Brother”).
 James Barrat (2013) Our Final Invention – Artificial Intelligence and the End of the Human Era (Thomas Dunne Books). Edge Question 2015: What do you think about machines that think? http://edge.org/annual-question/what-do-you-think-about-machines-that-think
 Nick Bostrom (2014) Superintelligence: Paths, Dangers, Strategies (Oxford University Press).
 For example, the following approach seems superior to what Google Flu Trends can offer: D. Brockmann and D. Helbing, The hidden geometry of complex, network-driven contagion phenomena. Science 342, 1337-1342 (2013).
 T. Preis, H.S. Moat, and H.E. Stanley, Quantifying trading behavior in financial markets using Google Trends. Scientific Reports 3: 1684 (2013).
 R.H. Thaler and C.R. Sunstein (2009) Nudge (Penguin Books).
 Süddeutsche (11.3.2015) Politik per Psychotrick, http://www.sueddeutsche.de/wirtschaft/verhaltensforschung-am-buerger-politik-per-psychotrick-1.2386755
 For example, many Big Data companies (even big ones) don't make large profits, and some are even making losses. Making big money often requires bringing a Big Data company to the stock market, or having it bought by another company.
 M. Gill and A. Spriggs: Assessing the impact of CCTV. Home Office Research, Development and Statistics Directorate (2005), https://www.cctvusergroup.com/downloads/file/Martin%20gill.pdf; see also BBC News (August 24, 2009) "1,000 cameras 'solve one crime'", http://news.bbc.co.uk/2/hi/uk_news/england/london/8219022.stm
 Journalist's Resource (November 6, 2014) The effectiveness of predictive policing: Lessons from a randomized controlled trial, http://journalistsresource.org/studies/government/criminal-justice/predictive-policing-randomized-controlled-trial. ZEIT Online (29.3.2015) Predictive Policing – Noch hat niemand bewiesen, dass Data Mining der Polizei hilft [Predictive policing – nobody has yet proven that data mining helps the police], http://www.zeit.de/digital/datenschutz/2015-03/predictive-policing-software-polizei-precobs
 The Washington Post (January 12, 2014) NSA phone record collection does little to prevent terrorist attacks, group says, http://www.washingtonpost.com/world/national-security/nsa-phone-record-collection-does-little-to-prevent-terrorist-attacks-group-says/2014/01/12/8aa860aa-77dd-11e3-8963-b4b654bcc9b2_story.html?hpid=z4; see also http://securitydata.newamerica.net/nsa/analysis
 D.M. Lazer et al. The Parable of Google Flu: Traps in Big Data Analytics, Science 343, 1203-1205 (2014).
 D. Helbing (2015) Thinking Ahead, Chapter 10 (Springer, Berlin). See also
 This problem is related to the method of "geoscoring"; see http://www.kreditforum.net/kreditwuerdigkeit-und-geoscoring.html/
 Silicon Valley is well known for this kind of culture.
 A. Mazloumian et al. Global multi-level analysis of the 'scientific food web', Scientific Reports 3: 1167 (2013), http://www.nature.com/srep/2013/130130/srep01167/full/srep01167.html?message-global=remove
 J. Schmieder (2013) Mit einem Bein im Knast – Mein Versuch, ein Jahr lang gesetzestreu zu leben (Bertelsmann).
 A. Diekmann, W. Przepiorka, and H. Rauhut, Lifting the veil of ignorance: An experiment on the contagiousness of norm violations, preprint http://cess.nuff.ox.ac.uk/documents/DP2011/CESS_DP2011_004.pdf
 Note that super-intelligent machines may be seen as an implementation of the concept of the "wise king". However, as I argue elsewhere, this is not a suitable approach to governing complex societies (see also the draft chapters of my book on the Digital Society at http://www.ssrn.com and https://futurict.blogspot.com, particularly the chapter on the Complexity Time Bomb: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2502559). Combinatorial complexity must be answered with combinatorial, i.e. collective, intelligence, and this requires personal digital assistants and suitable information platforms for coordination.
 Remember that it takes about two decades for a human to be ready for responsible, self-determined behavior. Before that, however, he or she may do a lot of foolish things (and this may actually happen later, too).
 I. Kondor, S. Pafka, and G. Nagy, Noise sensitivity of portfolio selection under various risk measures, Journal of Banking & Finance 31(5), 1545-1573 (2007).
 In other places (http://futurict.blogspot.com/2014/10/crystal-ball-and-magic-wandthe.html), I have metaphorically compared these technologies with a “magic wand” (“Zauberstab”).
The problem with these technologies is that they are powerful, but if we don't use them well, their use can end in disaster. A poem that nicely illustrates this is "The Sorcerer's Apprentice" by Johann Wolfgang von Goethe: http://germanstories.vcu.edu/goethe/zauber_dual.html, http://www.rither.de/a/deutsch/goethe/der-zauberlehrling/
 For example, it recently became public that Facebook had run a huge experiment trying to manipulate people's mood: http://www.theatlantic.com/technology/archive/2014/09/facebooks-mood-manipulation-experiment-might-be-illegal/380717/
This created a big "shitstorm": http://www.wsj.com/articles/furor-erupts-over-facebook-experiment-on-users-1404085840. There have also been attempts to influence people's voting behavior: http://www.nzz.ch/international/kampf-um-den-glaesernen-waehler-1.18501656. OkCupid even tried to manipulate people's private emotions: http://www.theguardian.com/technology/2014/jul/29/okcupid-experiment-human-beings-dating. It is also said that each of our Web searches now triggers about 200 experiments.
 J. Lorenz et al. How social influence can undermine the wisdom of crowd effect, Proceedings of the National Academy of Science of the USA 108 (22), 9020-9025 (2011); see also J. Surowiecki (2005) The Wisdom of Crowds (Anchor).
 M. Bloodgood and C. Callison-Burch, Using Mechanical Turk to build machine translation evaluation sets, http://www.cis.upenn.edu/~ccb/publications/using-mechanical-turk-to-build-machine-translation-evaluation-sets.pdf
 In an extreme case, this might even be a criminal act.
 Interestingly, for IBM Watson (the intelligent cognitive computer) to work well, it must be fed with unbiased rather than self-consistent information; i.e., pre-selecting inputs to remove contradictory information reduces Watson's performance.
 It seems, for example, that the attempts of the world's superpower to extend its power have rather weakened it: we are now living in a multi-polar world, and coercion works less and less. See the draft chapters of my book on the Digital Society at http://ssrn.com for more information.
 even though one never knows in advance what kinds of ideas and social mechanisms might become important in the future – innovation always starts with minorities
 C.A. Hidalgo et al. The product space conditions the development of nations, Science 317, 482-487 (2007). According to Jürgen Mimkes, economic progress (which goes along with an increase in complexity) also drives a transition from autocratic to democratic governance above a certain gross domestic product per capita. In China, this transition is expected to happen soon.
 This is the main reason why one should support pluralism.
 One might distinguish these into two types: dictatorships based on surveillance (“Big Brother”) and manipulatorships (“Big Manipulator”).
 Since digital weapons, so-called D-weapons, are certainly no less dangerous than atomic, biological, and chemical (ABC) weapons, they require international regulation and control.
 D. Helbing and S. Balietti, From social data mining to forecasting socio-economic crises, Eur. Phys. J Special Topics 195, 3-68 (2011); see also http://www.zeit.de/digital/datenschutz/2015-03/datenschutzverordnung-zweckbindung-datensparsamkeit; http://www.google.com/patents/US8909546
 Note that the field of complexity science has a large body of knowledge about how to reach globally coordinated outcomes based on local interactions.
 After all, humans have to register, too.
 E. Pariser (2012) The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think (Penguin).
 Some problems are so hard that no government and no company in the world has solved them (e.g. how to counter climate change). Large multi-national companies are often surprisingly weak at delivering fundamental innovations (probably because they are too controlling). That's why they keep buying small and medium-sized companies to compensate for this problem.
 Similar problems are known for software products that are used by billions of people: a single software bug can cause large-scale problems – and the worrying vulnerability to cyber attacks is further increasing.
 In fact, to avoid mistakes, the more we are flooded with information, the more we must be able to rely on it, as we have less and less time to judge its quality.
 This could end up in a way of organizing our society that one could characterize as “Big Manipulator” (to be distinguished from “Big Brother”).
 The following recent newspaper articles support this conclusion:
http://www.zeit.de/politik/ausland/2015-03/china-wachstum-fuenf-vor-acht , http://bazonline.ch/wirtschaft/konjunktur/China-uebernimmt-die-rote-Laterne/story/20869017 , http://www.nzz.ch/international/asien-und-pazifik/singapurer-zeitrechnung-ohne-lee-kuan-yew-1.18510938.
In fact, based on a statistical analysis by Jürgen Mimkes and my own observations, I expect that China will now undergo a major transformation toward a more democratic state in the coming years. First signs of instability of the current autocratic system are already visible, such as increased attempts to control information flows.
 D. Helbing, Responding to complexity in socio-economic systems: How to build a smart and resilient society? Preprint http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2583391
 D. Helbing, Creating (“Making”) a Planetary Nervous System as Citizen Web,
 L.M.A. Bettencourt et al. Growth, innovation, scaling, and the pace of life in cities, Proceedings of the National Academy of Sciences of the USA 104, 7301-7306 (2007).
 See D. Helbing, Globally networked risks and how to respond. Nature 497, 51-59 (2013). Due to the problem of the Complexity Time Bomb (http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2502559), we must either decentralize our world, or it will most likely fragment, i.e. break into pieces, sooner or later.
 Having a greater haystack does not make it easier to find a needle in it.
 This is particularly well-known for the problem of ambiguity. For example, a lot of jokes are based on this principle.
 M. Maes and D. Helbing, Noise can improve social macro-predictions when micro-theories fail, preprint.
 We know this also from so-called "phantom traffic jams", which appear for no apparent reason once the car density exceeds a critical value beyond which traffic flow becomes unstable. Such phantom jams could not be predicted even by knowing all drivers' thoughts and feelings in detail. However, they can be understood, for example, with macro-level models that do not require micro-level knowledge. These models also show how traffic congestion can be avoided: by using driver assistance systems that change the interactions between cars, based on real-time information about local traffic conditions. Note that this is a distributed control strategy.
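The instability described in this note can be reproduced with a toy simulation. The sketch below uses the classic Nagel-Schreckenberg cellular automaton, a standard minimal model of single-lane traffic; it is not taken from this text, and all parameter values (ring length, speed limit, braking probability) are illustrative assumptions. Above a critical density, stop-and-go waves emerge purely from the interaction rules, without any external cause.

```python
import random

random.seed(1)

RING = 200     # circular road with 200 cells
VMAX = 5       # maximum speed in cells per time step
P_SLOW = 0.3   # probability of random braking ("driver noise")

def simulate(density, steps=300):
    """Nagel-Schreckenberg cellular automaton on a ring road.

    Returns the fraction of cars standing still after `steps` updates,
    a rough indicator of (phantom) jamming."""
    n = int(density * RING)
    pos = sorted(random.sample(range(RING), n))  # car positions in ring order
    vel = [0] * n
    for _ in range(steps):
        # Parallel update: first compute all new velocities from current gaps...
        for i in range(n):
            gap = (pos[(i + 1) % n] - pos[i] - 1) % RING  # empty cells ahead
            vel[i] = min(vel[i] + 1, VMAX, gap)           # accelerate, keep headway
            if vel[i] > 0 and random.random() < P_SLOW:
                vel[i] -= 1                               # random slowdown
        # ...then move every car (ring order is preserved: no overtaking).
        pos = [(pos[i] + vel[i]) % RING for i in range(n)]
    return sum(1 for v in vel if v == 0) / n

print("stopped fraction at density 0.10:", simulate(0.10))  # free flow
print("stopped fraction at density 0.50:", simulate(0.50))  # jams emerge
```

Note that the model needs no knowledge of any driver's "thoughts": jams arise from the collective dynamics alone, which is exactly the point made above about macro-level understanding.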
 Assume one knows the psychology of two persons, but then they accidentally meet and fall in love with each other. This incident will change their entire lives, and in some cases it will change history too (think of Julius Caesar and Cleopatra, for example, but there are many similar cases). A similar problem is known from car electronics: even if all electronic components have been well tested, their interaction often produces unexpected outcomes. In complex systems, such unexpected, “emergent” system properties are quite common.
 In case of cascade effects, a local problem causes other problems before the system recovers from the initial disruption; those problems trigger further ones, and so on. Even hundreds of policemen could not prevent phantom traffic jams from happening, and in the past even large numbers of security forces have often failed to prevent crowd disasters (sometimes they even triggered or worsened them while trying to avoid them); see D. Helbing and P. Mukerji, Crowd disasters as systemic failures: Analysis of the Love Parade disaster, EPJ Data Science 1:7 (2012).
 I am personally convinced that the level of randomness and unpredictability in a society is relatively high, because it creates a lot of personal and societal benefits, such as creativity and innovation. Also think of the success principle of serendipity.
 D. Helbing et al. FuturICT: Participatory computing to understand and manage our complex world in a more sustainable and resilient way. Eur. Phys. J. Special Topics 214, 11-39 (2012).
 As we know, intellectual discourse can be a very effective way of producing new insights and knowledge.
 Due to the data deluge, the existing amounts of data increasingly exceed our processing capacities, which creates a "flashlight effect": while we might look at anything, we must decide what data to look at, and other data will be ignored. As a consequence, we often overlook things that matter. While the world was busy fighting terrorism in the aftermath of September 11, it did not see the financial crisis coming. While it was focused on that, it did not see the Arab Spring coming. The crisis in Ukraine also came as a surprise, and the response to Ebola came half a year late. Of course, the possibility or likelihood of all these events was reflected in some existing data, but we failed to pay attention to it.
 The classical telematics solutions based on a control center approach haven’t improved traffic much. Today’s solutions to improve traffic flows are mainly based on distributed control approaches: self-driving cars, intervehicle communication, car-to-infrastructure communication etc.
 This approach corresponds exactly to how Big Data is used at the elementary particle accelerator at CERN: 99.9% of the measured data are deleted immediately. One keeps only the data required to answer a certain question, e.g. to validate or falsify the implications of a certain theory.
 J. van den Hoven et al. FuturICT – The road towards ethical ICT, Eur. Phys. J. Special Topics 214, 153-181 (2012).
 This probably requires different levels of access depending on qualification, reputation, and merit.
 J. Rifkin (2013) The Third Industrial Revolution (Palgrave Macmillan Trade); J. Rifkin (2014) The Zero Marginal Cost Society (Palgrave Macmillan Trade).
 Government 3.0 initiative of the South Korean government, http://www.negst.com.ng/documents/Governing_through_Networks/3-icegov2013_submission_19.pdf
Two truths approach each other. One comes from inside, the other from outside, and where they meet we have a chance to catch sight of ourselves. (From “Preludes”)
Tomas Tranströmer, who was awarded this year’s Nobel Prize in Literature, has for years now been one of my ports of refuge. The books of his poetry on my shelves never remain unopened for long. I turn to him when I wish to come as close as possible to what cannot be said. This past decade was full of dark years, and I returned again and again to poets. They kept watch over me and, to adopt a phrase of Tranströmer’s, I survived on milk stolen from their cosmos.
I read Walcott, Bishop, Ondaatje, Szymborska, Bonta, and a dozen other marvelous writers, but above all I read Heaney and Tranströmer who, in different ways, fused the biggest questions with personal experience.
To read Tranströmer—the best times are at night, in silence, and alone—is to surrender to the far-fetched. It is to climb out of bed and listen to what the house is saying, and to how the wind outside responds. Each of his readers reads him as a personal secret. For this reason it is strange to see this master of solitude being celebrated in the streets or showing up as a trending topic on Twitter and a best-seller on Amazon. He usually dwells in quieter precincts.
Tranströmer’s poems owe something to Japanese tradition, and early in his career he wrote haiku. Reading him, one is also reminded of American poets like Charles Simic (for his surrealism) and Jim Harrison, Gary Snyder, and W. S. Merwin (for their plain speech and koan-like wisdom). But Tranströmer casts a spell all his own, and in fact the strongest associations he brings to my mind are the music of Arvo Pärt and the photography of Saul Leiter.
I swim out in a trance
on the glittering dark water.
A steady note of a tuba comes in.
It’s a friend’s voice: “Take up your grave and walk.” (From “Two Cities”)
His poems contain a luminous simplicity that expands until it pushes your ego out of the nest, and there you are, alone with Truth. In a Tranströmer poem, you inhabit space differently; a body becomes a thing, a mind floats, things have lives, and even non-things, even concepts, are alive. His memoir, “Memories Look At Me,” inspired me to title my weekly column for the Nigerian newspaper NEXT (for the year the column ran) “Words Follow Me.” There is much following in Tranströmer, much watching, from a distance and from close by, and the trees, pasts, houses, spaces, silences, and fields all take on invigilative personae. There are many dreams.
I dreamt that I had sketched piano keys out
on the kitchen table. I played on them, without a sound.
Neighbors came by to listen. (From “Grief Gondola #2”)
Tranströmer is well translated into English (even if he wasn’t, until this week, a best-seller), and there are versions by May Swenson, Robin Fulton, Robin Robertson, and others. My favorite book of the poems is “The Half-Finished Heaven,” a selection translated by Robert Bly. Bly’s language is so clean and direct it seems to bypass language itself. This was the volume I turned to the most during the horrors of the Bush and Cheney years. Even though around the same time my own belief in God had faded away, I found that I needed to somehow retain belief in a cloud of witnesses. I had strayed away from religious dogma, but my hunger for miracle speech had not abated. Tranströmer’s mysterious poems, hovering on the edge of the unsayable, met me right at this point of need.
I open the first door. It is a large sunlit room. A heavy car passes outside and makes the china quiver.
I open door number two. Friends! You drank some darkness and became visible.
Door number three. A narrow hotel room. View on an alley. One lamppost shines on the asphalt. Experience, its beautiful slag. (From “Elegy”)
And, from “The Scattered Congregation,” which is in five short parts, these lines:
We got ready and showed our home. The visitor thought: you live well. The slum must be inside you.
Nicodemus the sleepwalker is on his way to the Address. Who’s got the Address? Don’t know. But that’s where we’re going.
There’s a kind of helplessness in many of the poems, the sense of being pulled along by something irresistible and invisible. There are moments of tart social commentary, a sense of justice wounded (“the slum must be inside you”—for many years, Tranströmer worked as a psychologist at an institution for juvenile offenders). There is also in the poems a kind of motionlessness that is indistinguishable from terrific speed, in the same way Arvo Pärt’s music can sound fast and slow at the same time. It’s a good thing I’m unembarrassable about influence, because I realize now how many of Tranströmer’s concepts I have hidden away in my own work. When I’m asked in interviews what my favorite thing about New York is, I usually answer with a line lifted from “Schubertiana”: “Outside New York, a high place where with one glance you take in the houses where eight million human beings live.”
The images with which Tranströmer charges his poems bring to mind the concept of “acheiropoieta,” “making without hands”; in Byzantine art, acheiropoietic images were those believed to have come miraculously into being without a painter’s intervention. The Shroud of Turin and the Veil of Veronica are the most famous examples. These were images registered by direct contact, and they were usually images of the Holy Face of Christ. (Albrecht Dürer, in his immodest way, was alluding to such images when he painted his deliriously detailed full-frontal self-portrait of 1500.) I feel Tranströmer’s use of imagery is like this, and like contact printing, in which a photograph is made directly from a film negative or film positive. There is little elaborate construction evident; rather, the sense is of the sudden arrival of what was already there, as when a whale comes up for air: massive, exhilarating, and evanescent.
The satisfaction, the pleasure, the comfort one takes in these poems comes from the way they seem to have preëxisted us. Or perhaps, to put it another way, the magic lies in their ability to present aspects of our selves long buried under manners, culture, and language. The poems remember us and, if we are perfectly still, give us a chance to catch sight of ourselves.
No matter what your field, if you seek to make a splash, the ability to think outside the box is essential.
Canva’s Andrew Tate found forty inspiring reads to put you in a state of mind to get ahead through creative thinking, whether your field is business or the arts.
01. Thinking Fast and Slow – Daniel Kahneman
Lazy thinking is the bane of creativity. In Thinking Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman explains that there are really two different thinking systems within the brain. System 1 lets us make fast decisions based on intuition, while System 2 is more deliberative and slower. Sometimes quick, emotional decisions are great for creativity, but often we need to take a step back and reassess, bringing the second system into play. Kahneman explores different exercises you can do to make sure that for any creative decision you make, or any creative thinking you do, you are accessing the correct part of your brain.
02. Making Ideas Happen: Overcoming the Obstacles Between Vision & Reality – Scott Belsky
Throughout this series on creativity, we have always said that tenacity and determination are just as important to creativity as that initial bright spark. In Making Ideas Happen, Scott Belsky takes this a step further and gives you concrete strategies for realizing your ideas and developing the skills to make them happen time and time again.
03. Things I have learned in my life so far – Stefan Sagmeister
Stefan Sagmeister is one of the world’s foremost graphic designers. One of his many beautiful books, Things I Have Learned in My Life So Far incorporates material from his exhibitions alongside his maxims for great design and true creativity.
04. How We Got to Now: Six Innovations That Made the Modern World – Steven Johnson
If you learn one thing from How We Got to Now, it would be that we have no idea where we are going, but genius, fate, and serendipity drive us there. Steven Johnson tells wonderful tales of creative genius and the curious connections between one breakthrough and another.
Full story at Canva.
Kickstart your creativity.
Graphics credit: Canva
Posted by Kate Rinsema
Before Snowden, we saw the distribution of video and cables from the US State Department, leaked by Chelsea (then Bradley) Manning. That was an extraordinary occurrence, but one of such strangeness—the scale, the involvement of Julian Assange, Manning’s own military history—that it was impossible to know which aspects of that leak were singular occurrences and which indicated larger patterns.
Snowden, a far more knowledgeable and confident source than Manning, and holding far more significant material, has made some of those patterns visible. The leak of the NSA documents provides much information about political journalism in a networked age. The most important patterns are these: Individual sources have improved leverage, transnational news networks are becoming both essential and normal, and digital data is undermining older patterns of journalistic reputation.
Taken together, these changes disrupt the unstated bargain between governments and news outlets. In all but the most extraordinary cases, national news has been published in national outlets, with the borders of reporting, national interest, and national jurisdiction all lining up. After Snowden, that pattern is shredded. As journalistic outlets become more networked, the familiar geographic link between sources, reporters, publications, and subjects will weaken.
The open issue for the world’s investigative journalists is how far the world’s governments will go to restrict these networks. The threat of relatively unconstrained reporting of secrets has prompted extra-judicial attacks on publishing outlets, as with suspension of credit card payments to Wikileaks following Congressional complaint. (Full disclosure: I am a supporter of Wikileaks, both as a philosophical matter and as a donor during the period in which its finances were first under attack. I am also a donor to ProPublica and The Guardian, in large part because of their role in preventing the US from limiting publication of the Snowden revelations.)
We are quite accustomed to autocratic governments like Saudi Arabia and Egypt hampering journalism, but with the rising threat of real transnational reporting, we are seeing authoritarian leaders in South Korea and Turkey push for control of media. Even governments with a constitutional commitment to freedom of speech and of the press, such as the UK and US, have attempted to create de facto restrictions on publishing where the law allows them no direct relief. The essential question is how journalists and publications can strengthen their ability to report important news in an age of increasing interference.
There have always been leaks and leakers. Any discussion of journalism in the US will eventually come around to Watergate and Deep Throat, the code name for Mark Felt, associate director of the FBI and leaker-in-chief. Likewise, digital data made leaking easier long before Snowden; the site Cryptome.org was set up in 1996 to do much of what Wikileaks also does, and Wikileaks itself was roiling national politics long before Manning ever showed up, as with its accusations of corruption by Daniel arap Moi in 2007.
In large bureaucracies, the scarcest resource is not access to data, but individual bravery.
The Manning case, though, was unusual: a massive leak from inside a secured network run by the richest country on earth, one seemingly well equipped to guard its own secrets. It concerned the United States, the world’s sole superpower and most important global actor. And Manning, visibly upset with US conduct and on disciplinary probation, was only allowed continued access because the wars in Iraq and Afghanistan increased the need for technical talent while decreasing the supply.
The cumulative effect was to make the revelations of 2010 seem as if they might be a one-off, rather than a new pattern. Many people commenting on the Manning leak believed that nothing of that magnitude would happen again. This assumption rested on the conviction that national governments and large firms would quickly find ways to limit access to their secrets by insiders who might be willing to leak that information.
The Snowden leak shows us that this organizational adaptation did not happen. The National Security Agency is among the best-funded and most competent group of electronic spies in the world. It had three years after the example set by Manning to limit possible leaks, and it failed, spectacularly. Not only did the agency lose a huge trove of data, officials could not initially identify who had leaked it and, if they are to be believed, still cannot use their own internal controls to discover which documents Snowden had in his possession when he left.
If the NSA cannot secure its own documents, what hope is there for less competent institutions? All large institutions with secrets now face a serious threat to their current practices in making use of digital data (exactly as Assange predicted they would back in 2006). The value of freeing information from physical containers is that more people can see and use it simultaneously, at lower cost. This is a boon for almost every possible use of this data, but it is in tension with any desire to keep it secret.
This tension is fundamental. Sharing data widely is the principal source of risk to its secrecy, but making secret data harder to share also makes it harder to use, and thus less valuable. This dilemma grows more severe the more is to be kept secret, because large stores of data require increasingly automated processes of indexing and linking, which in turn require reducing barriers between data stores, so as to “connect the dots.” And all this hoped-for dot-connecting requires scores of junior analysts and administrators just to manage basic operations.
From a bureaucratic point of view, there are three obvious solutions to this problem: immediate restrictions on system access for anyone skeptical about the mission; dramatic limits on the number of junior employees given access; and total internal surveillance. Acting on these solutions would indeed lower the number of leaks, but would leave an organization trying to use vast datasets with a skeleton crew of paranoid yes-men, hardly a recipe for effective organizational action.
Some bureaucracies will indeed subject themselves to dramatically increased degrees of internal paranoia over who is to have access to which pieces of data, but most won’t, and the ones that do will find that it hampers their effectiveness. Just as people write down their nominally secret passwords on Post-Its, organizations will re-open their databases to competent administrators and entry-level analysts, because they will have to if they want to make use of the information.
Bureaucracies are permanently vulnerable to a revolt of the clerks. The increased value of digital data comes almost entirely from its improved shareability, and if data is more shareable, there is a greater risk that it will be shared. In a digital world, it no longer takes a senior figure like Mark Felt to leak; it can be anyone who has access to the data. For all Snowden’s genius, he operated far from the levers of power within his organization.
What Snowden (and Manning) show is that in large bureaucracies, the scarcest resource is not access to data, but individual bravery. Brave sources are rare but not vanishingly so; a brave source can accomplish the delivery of information on a scale unimaginable even a decade ago.
One curiosity of the half-millennium since Gutenberg, and especially of those hundred years in which the telegraph, photograph, phonograph, telephone, cinema, radio, and television all appeared, is that for all the innovation, media remained relentlessly national, constrained by local economics and politics.
For physical media—books and newspapers, letters and photographs—international tariffs priced out much border crossing. The cost of building out the infrastructure for the telegraph and later the telephone had the same effect. Even radio and TV, transported as pure energy, first appeared when broadcast engineering was barely adequate to cover a whole city, much less cross national boundaries. Even border-spanning news organizations such as the BBC had to set themselves up country by country.
Through the end of the 20th century, leaks of any importance were delivered to, and published by, the press of the nation concerned. Profumo was reported in England, Watergate in the US, and so on. Even as entertainment became more global, the news (especially political news) remained nationally sourced, nationally published, and nationally consumed.
Here, too, there are historical precedents before Snowden. It is no exaggeration to say that the current pope got his mitre in part because of The Boston Globe’s coverage of child sexual abuse by priests. The Globe published its series on the horror of Father John Geoghan’s crimes in 2002, just far enough into the internet’s existence for the story to spread outside the US, sparking international scrutiny. Similarly, The Guardian’s correspondent in South Africa told me later in that decade that he had regarded his job as reporting on South Africa to the UK, but had recently discovered that his South African audience was now larger than his British one. The Guardian website had become a platform that allowed South Africans to read about themselves.
Those were 21st-century equivalents of the first English bibles being printed in Antwerp: a way of placing a single publisher out of the reach of the target nation’s government. What’s different today is the “multiple publishers” strategy that Assange improvised and Snowden extended, akin to insisting that every synagogue have two Torahs or every database store information in multiple locations. Having more than one copy of the leaked data and more than one publication working on the story makes the leak more effective.
After Manning, it was easy to believe that organizations like Wikileaks were the hinge on which any such leak would depend. In the aftermath of the State Department leak, Assange rather than Manning was presented as the central figure, not least because he was charismatic, brilliant, and odd—catnip for the press. Given his outsized presence, it was easy to believe that there had to be some organization between the leaker and the press to make any system of international distribution work.
After Snowden, we see how much power now lies with the leaker. Snowden demonstrated that the principal value Wikileaks had provided was not in receiving the source materials, but in coordinating a multi-national network of publishers. Snowden himself took on this function, contacting Laura Poitras and Glenn Greenwald directly.
The potential for a global news network has existed for a few decades, but its practical implementation is unfolding in ours. This normalization of trans-national reporting networks reduces the risk of what engineers call a “single point of failure.” As we saw with Bill Keller’s craven decision not to publish James Risen’s work on the NSA in 2004, neither the importance of a piece of political news nor its existence as a scoop is enough to guarantee that it will actually see the light of day. The global part is driven by the need for leakers to move their materials outside national jurisdictions. The network part is driven by the advantages of having more than one organization with a stake in publication.
The geographic spread of the information means that there is no one legal regime in which injunctions on publication can be served, while the balance of competition and collaboration between organizations removes the risk of an editor unilaterally killing newsworthy coverage. Now and for the foreseeable future, the likelihood that a leak will appear in a single publication, in the country in which it is most relevant, will be in inverse proportion to the leak’s importance.
These two changes—the heightened leverage of sources and the normalization of trans-national news networks—are threatening even to democratic states with constitutional protections for the press (whether de jure, as in the US, or de facto, as in the UK). Those governments always had significant extra-legal mechanisms for controlling leaks at their disposal, but empowered sources and transnational networks threaten those mechanisms.
This containment of journalistic outlets inside national borders resembled a version of the Prisoner’s Dilemma, a social science thought-experiment in which each of two people is given a strong incentive to pursue significant short-term gain at the other’s expense, alongside a weaker incentive to create small but mutual, longer-term value. The key to the Prisoner’s Dilemma is what Robert Axelrod, the theorist of its iterated form, calls “The Shadow of the Future.” The shadow of the future is what keeps people cooperating over the long term—in friendships, businesses, marriages, and other relationships—despite the temptations of short-term defection of all sorts.
News outlets and governments exist in a version of the Prisoner’s Dilemma. Publications have a short-term incentive to publish everything they know, but a long-term incentive to retain access to sources inside the government. Governments have a short-term incentive to prevent news outlets from discovering or publishing anything, but a long-term incentive to be able to bargain for softening, delaying, or killing the stories they really don’t want to see in public (as happened with Keller).
So long as both institutions have a long time horizon, neither side gets all of what it wants, but neither side suffers the worst of what it fears, and the relationship bumps along, year after year. (There have been a few counter-examples: I.F. Stone did all his work for his weekly newsletter by researching government data, never interviewing politicians or civil servants. He reasoned that the quid pro quo of increased access but reduced ability to publish would end up creating more restrictions than it was worth.)
The shadow of the future has meant that even in nations with significant legal protections for free speech, the press’ behavior is considerably constrained by mutual long-term bargains with the government. Empowered leakers and transnational publication networks disrupt this relationship. A leaker with a single issue—the world should see what the State Department or the NSA is doing, to take the two obvious examples—has no regard for the shadow of the future, while publications outside the US will not be constrained by legal challenges, threatened loss of insider access, or appeals to patriotism.
There is one final pattern that the Snowden leaks make visible. In the middle of the 20th century, mainstream news both relied on and produced cultural consensus. With the erosion of the belief that mainstream media speaks to and for the general public in an unbiased way, the presumed lack of objectivity of any given news organization has become a central concern. Alongside this change, however, we are witnessing the spread of a new form of objective reporting: reporting done by objects.
There are, of course, precedents to object-based reporting; tape-recorded conversations in Nixon’s White House ended his presidency, as his foul-mouthed, petty vindictiveness became obvious to all. The heroic work of The Washington Post is the stuff of journalistic lore, but the mechanical nature of the tape recorders actually made them the most trusted reporters on the story.
As the quality and range of reporting by objects has increased, it has had the curious effect of making the partisan nature of both reporters and publications a less serious issue. If Mother Jones, predictably liberal, had only been able to report Mitt Romney’s remarks about the 47 percent because a bartender heard and repeated them, the story would have circulated among the magazine’s left-leaning readers, but no farther (as with most stories in that publication). That bartender recorded the conversation, however, and the fact of the recording meant Mother Jones’ reputation didn’t become a serious point of contention. Because people only had to trust the recording, not the publication, the veracity of the remarks was never seriously challenged.
This pattern of objective recording trumping partisan reputation is relatively new. Indeed, in the 47 percent story, otherwise sophisticated political observers like Jonathan Chait predicted that Romney’s remarks would have little real effect, because they didn’t understand that the existence of a recording simply neutralized much of the “out of context” and “he said, she said” posturing that usually follows. Mother Jones no longer had to be mainstream to create a mainstream story, provided its accuracy was vouched for by the bartender’s camera.
In Snowden’s case, many of the early revelations about the NSA, and especially the wholesale copying of data flowing through various telecom networks, had already been reported, but that reporting had surprisingly little effect. The facts of the matter weren’t enough to alter the public conversation. What did have an effect was seeing the documents themselves.
All inter-office PowerPoint decks are bad, but no one does them as poorly as the federal government. The slides describing the Prism program were unfakeably ugly, visibly made by insiders talking to insiders. As with Romney’s remark about the 47 percent, the NSA never made a serious attempt to deny the accuracy of the leak or cast aspersions on the source, the reporters, or the publications.
Like the Nixon tapes and the Romney video, the existence of the Snowden documents also gave Glenn Greenwald, one of the most liberal journalists working today, a bulwark against charges of partisan fabrication. Indeed, he didn’t just publish his work in The Guardian, a liberal UK-based paper; he took the data with him to a startup, The Intercept, believing (correctly) that the documents themselves would act as a kind of portable and surrogate reputation, disarming attempts by the government or partisans elsewhere to deny the accuracy of present or future stories generated from those documents.
In past leaks—the Pentagon Papers, Watergate—it took the combined force of leaked information and a mainstream publication to get the public’s attention, and mainstream publications were, almost by definition, the publications most invested in the shadow of the future. Meanwhile, more partisan publications of the 20th century were regarded with suspicion; even accurate reporting that appeared in them rarely went beyond niche audiences. After Snowden, the world’s governments are often denied even this defense. This creates a novel set of actors: an international partisan press that will be trusted by the broad public, so long as it traffics in documents that announce their own authenticity.
There will be more Snowden-style leaks, because the number of people with access to vital information has proliferated and cannot easily be reduced. Even one-in-a-million odds of a leak start to look likely if a million people have access, as was the case with the State Department’s cables. So what should journalists and publications do to maximize their ability to report newsworthy stories and minimize government interference? Three broad skills are required.
First and most importantly, reporters have to get good at encrypted communication. (It would be useful if news organizations began encrypting even routine communication, to avoid signaling to the governments they cover when something particularly important is happening, and to provide cover to sensitive sources.) Encryption is not an IT function; individual reporters have to become comfortable sending and receiving encrypted email, at a minimum. And, as was the case with both Manning and Snowden, it’s important to recognize—and to get the source to recognize—that encryption is no guarantee that a source won’t eventually be identified. It is a tool for buying time, not guaranteeing anonymity.
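The point that encryption buys time rather than guaranteeing anonymity can be made concrete with a toy sketch. The following Python fragment is not the workflow described above—reporters in practice rely on vetted tools such as GnuPG for email—but a minimal, hypothetical illustration of the basic shape of symmetric encryption: a shared secret turns a readable message into ciphertext, and the same secret recovers it. Protecting the message in transit does nothing, on its own, to hide who sent it.

```python
# Toy one-time-pad illustration. NOT a substitute for vetted tools
# (GnuPG, Signal); it only shows the shape of symmetric encryption:
# a shared key, an unreadable ciphertext, and exact recovery.
import secrets


def encrypt(plaintext: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    assert len(key) >= len(plaintext), "key must cover the message"
    return bytes(p ^ k for p, k in zip(plaintext, key))


decrypt = encrypt  # XOR is its own inverse

message = b"meet at the usual place"
key = secrets.token_bytes(len(message))  # shared secret, used once

ciphertext = encrypt(message, key)
assert ciphertext != message            # unreadable without the key
assert decrypt(ciphertext, key) == message  # recipient recovers it
```

Note what the sketch does not conceal: the message's length, the fact that it was sent, and the identities of sender and receiver—exactly the metadata that can eventually identify a source.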
Second, journalists and institutions in contact with leakers need to have a plan for involving other journalists or institutions located in a different jurisdiction. While the leaks that get the most attention are national scale, we can expect additional leaks from inside businesses and local governments. It may be valuable to have a New Jersey newspaper holding vital documents about a sheriff in Colorado, to make sure the Colorado paper can’t be successfully pressured to withhold them. (This “doomsday switch” scenario seems to have been used by John McAfee, in his fight with the government of Belize, an indication that the pattern extends beyond journalism.)
And third, both journalists and publications should figure out to whom they might be useful as a third-party recipient of some other journalist’s or publication’s secrets. In moments of crisis (and important leaks tend to precipitate crises), those in need of backup will turn to people they already trust. If you are a journalist, editor, or publisher, ask yourself which other publications, anywhere in the world, would turn to you if they needed backup.
These leaks are far more threatening to secretive organizations when perpetrated by clerks instead of chiefs and distributed outside the bounds of local jurisdiction; they are also harder to question or deny. We are already seeing the world’s democracies behave like autocratic governments in the face of this threat; the Obama administration has become the greatest enemy of press freedom in a generation (a judgment recently made by James Risen, the man whose NSA story Bill Keller quashed).
Leaks will still be relatively rare. But because they can happen at large scale, across transnational networks, and provide documents the public finds trustworthy, they allow publications some relief from extra-legal constraints on publishing material in the public interest.
Brave sources are going to require brave journalists and brave publications. They are also going to require lots of technical expertise on encryption among reporters and lots of cooperation among sometime competitors. The job of publications is to air information of public concern, and that is increasingly going to mean taking steps to ensure that no one government can prevent publication. Nothing says “We won’t back down” like burning your boats on the beach.
Clay Shirky has a joint appointment at New York University, as a Distinguished Writer in Residence at the Arthur L. Carter Journalism Institute and as an assistant arts professor in the Interactive Telecommunications Program. He blogs at shirky.com/weblog. This story was published in the March/April 2015 issue of CJR with the headline, “Revolt of the clerks.”