Consider the totality of all human communication, opinion, knowledge, and the mechanisms by which these things change over time. This totality is an example of a hyperobject, but we don't have a good term to describe it. Some people call it the information environment or the marketplace of ideas, but to use such terms is to assume that environmental or economic metaphors are appropriate. For lack of a better term, I will refer to it as public discourse.
This post is an inventory of metaphors I've encountered for public discourse.
Metaphors are useful in much the same way that taxonomies are useful. Noticing sparsely populated sections of a well-constructed taxonomy can prompt us to perceive and articulate phenomena that would otherwise be subject to hypocognition. In a similar way, exploration of the unused parts of existing metaphors can guide us towards new conceptual frameworks on which to base future action.
For example, if public discourse is a marketplace, what are the goods or services being traded? What is the currency? What types of market failure are possible? I've done some of this conceptual algebra in the course of compiling this inventory, but more could be done. In particular, I think it is worth thinking through what regulatory frameworks are implied by different metaphors.
This post is a working document, and will be updated and revised as I encounter new metaphors and my thinking evolves. I try to make each system of metaphors somewhat coherent, but this is a work-in-progress. Throughout, I've included (in blue) suggestions for how the metaphors might be plausibly extended.
Please send feedback or suggestions to .
Economic
Those who have new ideas, or advocate for existing ones, are ‘producers’. Those who agree with an idea are ‘consumers’.
Idioms: “marketplace of ideas”
- Words don’t have standardised value. Their meaning depends on who says them and in what context.
- Information has three main properties that complicate its role in market transactions. (1) It is an experience good—you must experience an information good before you know what it is. (2) High returns to scale—information has a high fixed cost of production but low marginal costs of reproduction. (3) Often public goods—information goods are typically non-rivalrous and sometimes non-excludable. See Hal Varian, Markets for Information Goods (1998).
We are afraid to put men to live and trade each on his own private stock of reason; because we suspect that this stock in each man is small, and that the individuals would do better to avail themselves of the general bank and capital of nations and of ages.
Jonathan Stray, in response to criticisms of the engagement-optimising behaviour of online platforms, proposes we think of attention as something like profit. No media entity is sustainable if it does not continue to attract attention. But there are downsides to optimising for attention to the exclusion of other considerations.
Idioms: “paying attention” / “time well spent” / “liar’s dividend”
What the attention economy takes for granted is the quality of attention, because like all modern capitalist systems, it imagines its currency as uniform and interchangeable. “Units” of attention are assumed undifferentiated and uncritical. To give a particularly bleak yet useful example, if I’m forced to watch an ad, the company doesn’t necessarily know how I am watching the ad. I may indeed be watching it very carefully but like a practitioner of aikido who seeks to better understand her enemy–or for that matter, like Thomas Merton observing the corruption of the world from his hermitage. My “participation” may be disingenuous, like Diogenes rolling his barrel industriously up and down the hill to appear productive. As a precursor to action, these drills and formations of attention within the mind represent a primary space of volition.
Thinking about the long-term and network effects of fact-checking gives us more reasons for optimism about the practice. Yes, individual lies often get vastly more attention than corrections, but take a network-level perspective: fact-checking seeks to inject additional signal that, more often than not, helps truth be sorted from untruth. In a network, not everybody needs to read a fact check for the true story to become widely accepted and for unreliable sources to be discredited. Over time, quality as an information source can show through, despite the vagaries of other factors.
Maybe… the same algorithms that presently identify popular messages and promote them could have the opposite effect, like those circuit breakers in stock exchanges. They could be wired to the brakes instead of the gas.
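The “brakes instead of the gas” idea above can be sketched as a demotion rule in a ranking function. The threshold, multipliers, and function name below are entirely hypothetical; this is a minimal sketch of the circuit-breaker pattern, not any platform's actual algorithm.

```python
# Hypothetical sketch: a "virality circuit breaker" for a ranking feed.
# When an item's share velocity exceeds a tripwire, its popularity boost is
# suspended (the "brakes") instead of amplified (the "gas").

def rank_score(base_relevance: float, shares_last_hour: int,
               tripwire: int = 1000) -> float:
    """Return a ranking score that stops rewarding virality past a threshold."""
    if shares_last_hour >= tripwire:
        # Circuit breaker tripped: demote rather than promote.
        return base_relevance * 0.5
    # Normal regime: popularity still contributes, but only sub-linearly.
    return base_relevance * (1 + shares_last_hour / tripwire)

print(rank_score(1.0, 100))   # below tripwire: modest boost
print(rank_score(1.0, 5000))  # tripped: demoted
```

As with exchange circuit breakers, the point is not to ban trading (sharing) but to interrupt runaway feedback once velocity crosses a line.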
Those with attention to give are ‘producers’ of attention, and those who want other people’s attention are ‘consumers’ of attention.
Another metaphor is that of politics as a marketplace of rage, as explicated by Peter Sloterdijk. For now I am classifying that metaphor as a subset of this one, with rage being a particular variety of attention. At some point it may make sense to separate it out as a distinct metaphorical framework.
Idioms: “attention economy”
How do individual grievances become streamlined into a collective expression of dissidence, political opposition, and aggregate supply of discontent? This socio-affective landscape functions very much like a traditional banking system in which rage replaces money and becomes the main political currency. …
Political parties and movements define a non-monetary banking system where rage banks operate as collection points of affects; they facilitate transactions with the rage of others in the same way monetary banks operate with the money of their customers. “They provide liaison between rage capacities and a desire for dignity. Their contract is based on a promise to their clients to disburse a return in the form of increased self-respect and a more powerful grasp of the future, provided the clients refrain from independent utilization of their rage. By doing this, they relieve their clients of the difficulty of having to take their own initiative, while nevertheless promising thymotic gains.”
See: Craig Mod on attention accounting.
Idioms: “attention mongering” / “attention philanthropy”
Criticisms:
- Promotes a left-brained conception of attention-as-resource at the expense of a more right-brained attention-as-experience. (source)
- By conceiving of attention-as-resource, it invites its commodification and exploitation. (source)
- More broadly, promotes a view of attention as homogeneous and interchangeable when in reality there are many different varieties of attention. (source)
In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.
This is the attention economy, built upon the premise, becoming conviction, becoming fact, that human attention is productive of value. How has it happened that whether conceived of as informal workers, content providers, gamers, consumers, prosumers, or audiences, we, the people of Earth, still have something that corporations want? Like clean air, attention is something that once could be had for free but is now being encroached upon as the next and perhaps final frontier. Attention is now a commodity, and a special kind of commodity at that.
…conceiving of attention as a resource misses the fact that attention is not just useful. It’s more fundamental than that: attention is what joins us with the outside world. ‘Instrumentally’ attending is important, sure. But we also have the capacity to attend in a more ‘exploratory’ way: to be truly open to whatever we find before us, without any particular agenda.
Attention discourse proceeds under the sign of scarcity. It treats attention as a resource, and, by doing so, maybe it has given up the game. To speak about attention as a resource is to grant and even encourage its commodification. If attention is scarce, then a competitive attention economy flows inevitably from it. In other words, to think of attention as a resource is already to invite the possibility that it may be extracted.
(of people’s attention)
- Bank notes are produced with anti-forgery mechanisms where an institution makes it hard to create the notes and provides an easily accessible way for individuals and businesses to verify them. Can we create similar content authenticity mechanisms?
Potential analogies that could shape responses could include: Banknotes and anti-forgery? In this case, an institution makes it hard to create the notes and provides an easily accessible way for individuals and businesses to verify them.
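The banknote analogy maps onto a familiar cryptographic pattern: an institution makes content expensive to forge and cheap to verify. The sketch below uses an HMAC purely for brevity; a real content-authenticity scheme would use public-key signatures so that anyone can verify without holding the issuing key. All names and keys here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical sketch of the banknote pattern: an issuing institution signs
# content (hard to forge) and provides an easy verification check.
ISSUER_KEY = b"institution-secret"  # held by the issuing institution

def issue(content: bytes):
    """Attach an authenticity tag to a piece of content."""
    tag = hmac.new(ISSUER_KEY, content, hashlib.sha256).hexdigest()
    return content, tag

def verify(content: bytes, tag: str) -> bool:
    """Cheap check that the content is exactly what was issued."""
    expected = hmac.new(ISSUER_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

article, tag = issue(b"Official statement, 2024-01-01")
print(verify(article, tag))                # True: genuine
print(verify(b"Doctored statement", tag))  # False: forgery detected
```

The hard part of the analogy is not the cryptography but the institution: banknotes work because one issuer is widely trusted, which public discourse conspicuously lacks.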
The ‘money’ is the benefit people think they will get from paying attention to something.
Here are some characteristics of pseudo-events which make them overshadow spontaneous events: pseudo-events are more dramatic … are easier to disseminate and to make vivid … can be repeated at will … cost money to create … are more intelligible and hence more reassuring … are more sociable, conversable, and more convenient to witness … [and] spawn other pseudo-events in geometric progression. … By this new Gresham’s law of American public life, counterfeit happenings tend to drive spontaneous happenings out of circulation.
Along with Malthus’s law of information, I may as well coin a Gresham’s Law of Information. Gresham said that bad money drives out good. Well, bad information crowds out good. Cheap, low quality information on the Internet can cause problems for providers of high-quality information.
The Encyclopedia Britannica offered an Internet edition to libraries with a site license subscription price of several thousand dollars. Microsoft’s Encarta retails for $49 for a CD ROM. Encarta’s doing fine; but Britannica is in serious trouble. Britannica is now offering a home subscription for $150 per year, and a home CD version for $70, but even this may be too high.
So perhaps low-quality information really does drive out good. Maybe… but Gresham’s law really should be restated—it’s not that bad money crowds out good, but that bad money sells at a discount. So bad information should sell at a discount. Good information—relevant, timely, high-quality, focussed, and useful information—like the Britannica—should sell at a premium. And this brings me back to the Better Bit Bureaus. The critical problem for the commercial providers of content is to find a way to convince the user that they actually have timely, accurate, relevant, and high-quality information to sell.
It was widely assumed in the first heady days of Web 2.0 that the digital revolution would spawn a global auto-correct facility; that falsehood would be driven out by the defence mechanism of e-accountability. Instead, it has sometimes seemed that the Internet is governed by an epistemological version of Gresham’s Law; namely that bad money drives out good.
One implicit assumption about communication is that it does in fact contain social information. One way to think about this “social information” is how much human intentionality was involved in the process that led your senses to receive that communication; that is, how much human time, thought or effort it required.
The ratio of intentionality contained at different stages in this process has shifted with technology. If the communication is face-to-face, there’s direct observation of the intentionality of the interlocutor, and structurally, there’s social information encoded in the fact that you’re interacting in the first place.
GPT-3 is a communication revolution that threatens to eliminate the possibility of information about the original human intentionality behind a given text post.
… Servers running GPT-3, appropriately trained on a specific text domain (like Reddit’s Am I the Asshole forum) could trivially double or triple the total number of words uploaded to that forum. Tens of thousands of humans would immediately become victims of what I call “information fraud”: consuming signals that purport to contain information about human intentionality when that information is in fact absent.
… We live in an information economy; there are more possible applications for GPT-3 powered information fraud than I can imagine. But many of our systems rely on “information stability” in our signals; the specter of “informational hyper-inflation” is haunting the web.
“liar’s dividend”
… this concept is known as the Cantillon effect. For instance, when the US government passes multi-trillion-dollar bailouts, this seems to juice the stock market before anything else. If your cost of living increases before you get those newly printed dollars, your economic power has been transferred to whoever owns the stocks.
Much less understood is that there also exists a verbal Cantillon effect. When little white lies get added to the body of official knowledge, they first juice the stock price of the public intellectuals peddling them. If a professor publishes a book with a top New York publisher promoting a novel but utterly fantastic claim about how the world works, it juices the stock price of the author, their university, and their publisher. And why shouldn’t it? The author has made a novel discovery, which nobody understood until now. Most people have no way of vetting such claims directly, so it’s perfectly rational for most people to increase their respect for, and allocate money to, all of the players involved in producing this discovery. Especially if it has profound moral significance, like the author has discovered a systematic but invisible social ill and they know how to fix it.
If you’re sociologically close to the printers of official knowledge, little white lies are profitable. But if you’re a normal person trying to figure out how the world works, you’re screwed. By the time normal people hear that the discovery was false, they’ve been living for years according to a false mental model. Maybe society fully updates and the false idea is washed away forever, just as the effect of money printing eventually washes out with a rise in prices and wages across the board, but individuals close to the printers profited in the short term, and most people paid a cost.
An ‘economic crash’ caused by a collapse in the value of truth, implying that we were previously living in a ‘truth bubble’ in which truth was overvalued.
You get a show or a movie you’re really dying to watch, and you end up staying up late at night, so we actually compete with sleep. And we’re winning!
… as an attitudinal foundation for relating to society and technology, Waldenponding is, I am convinced, a terrible philosophy at both a personal and collective level. It’s a world-and-life negation. A kind of selfish free-riding/tragedy of the commons: not learning to handle your share of the increased attention-management load required to keep the Global Social Computer in the Cloud (GSCITC) running effectively.
Those who manipulate public opinion are ‘producers’ of influence, those who want to manipulate public opinion are ‘consumers’ of influence.
Implied Interventions:
- Treat ‘degree of personalisation’ in the information environment as an ‘interest rate’ that increases or decreases the cost of influence and polarisation. This might make sense in the context of the middleware proposal. The personalisation rate would be set by a central institution, akin to a central bank. The personalisation rates on particular recommender systems might be allowed to float around the official rate, but not vary too far.
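A minimal sketch of how such a floating personalisation rate might work, assuming a single scalar ‘personalisation weight’ per recommender; the official rate, band width, and function names are all hypothetical:

```python
# Hypothetical sketch of the 'personalisation rate' intervention: a central
# institution sets an official rate, and each recommender's personalisation
# weight may float only within a fixed band around it.

OFFICIAL_RATE = 0.40   # set by the central institution (hypothetical)
BAND = 0.10            # permitted float around the official rate

def clamp_personalisation(requested: float) -> float:
    """Clamp a recommender's requested personalisation weight to the band."""
    low, high = OFFICIAL_RATE - BAND, OFFICIAL_RATE + BAND
    return max(low, min(high, requested))

def blended_score(personal: float, general: float, requested_rate: float) -> float:
    """Mix a personalised score with a general-audience score."""
    r = clamp_personalisation(requested_rate)
    return r * personal + (1 - r) * general

print(clamp_personalisation(0.95))  # capped at the upper band: 0.5
print(clamp_personalisation(0.45))  # within the band: unchanged
```

Raising or lowering the official rate would then make hyper-targeted influence more or less ‘expensive’ across all compliant recommenders at once, which is the central-bank part of the analogy.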
“manufacturing consent”
Environmental
Ideas exist as ‘lifeforms’ in some environment and are subject to evolutionary pressures. Nadia Eghbal puts forward a darker version of the metaphor, in which parasitic ideas are the primary agents in competition, using humans as host organisms.
Full Fact propose classifying information incidents like natural disasters.
Criticisms:
- Nature is violent, and if you think public discourse is like nature then you are likely either to oppose it (advocating a static discourse) or to condone immoral discourse as inevitable.
- Important human distinctions (such as those between mindlessness and creativity, determinism and choice, right and wrong) do not exist in nature.
- The respective mechanisms of transmission, variation and selection for memes and genes are very different.
- There is no close informational analogue of a species, organism, cell, or of sexual or asexual reproduction.
- Public Health
Although Marx and the fascists assumed false theories of biological evolution, it is no accident that analogies between society and the biosphere are often associated with grim visions of society: the biosphere is a grim place. It is rife with plunder, deceit, conquest, enslavement, starvation and extermination. Hence those who think that cultural evolution is like that end up either opposing it (advocating a static society) or condoning that kind of immoral behaviour as necessary or inevitable.
Arguments by analogy are fallacies. Almost any analogy between any two things contains some grain of truth, but one cannot tell what that is until one has an independent explanation for what is analogous to what, and why. The main danger in the biosphere–culture analogy is that it encourages one to conceive of the human condition in a reductionist way that obliterates the high-level distinctions that are essential for understanding it – such as those between mindless and creative, determinism and choice, right and wrong. Such distinctions are meaningless at the level of biology. Indeed, the analogy is often drawn for the very purpose of debunking the common-sense idea of human beings as causal agents with the ability to make moral choices and to create new knowledge for themselves.
As I shall explain, although biological and cultural evolution are described by the same underlying theory, the mechanisms of transmission, variation and selection are all very different. That makes the resulting ‘natural histories’ different too. There is no close cultural analogue of a species, or of an organism, or a cell, or of sexual or asexual reproduction. Genes and memes are about as different as can be at the level of mechanisms, and of outcomes; they are similar only at the lowest level of explanation, where they are both replicators that embody knowledge and are therefore conditioned by the same fundamental principles that determine the conditions under which knowledge can or cannot be preserved, can or cannot improve.
An idea seen sometimes is that cultures undergo selection & evolution, and as such, are made up of adaptive beliefs/practices/institutions, which no individual understands (such as farming practices optimally tailored to local conditions); even apparently highly irrational & wasteful traditional practices may actually be an adaptive evolved response, which is optimal in some sense we as yet do not appreciate (sometimes linked to “Chesterton’s fence” as an argument for status quo-ism).
This is not a ridiculous position, since occasionally certain traditional practices have been vindicated by scientific investigation, but the lenses of multilevel selection as defined by the Price equation shows there are serious quantitative issues with this: cultures or groups are rarely driven extinct, with most large-scale ones persisting for millennia; such ‘natural selection’ on the group-level is only tenuously linked to the many thousands of distinct practices & beliefs that make up these cultures; and these cultures mutate rapidly as fads and visions and stories and neighboring cultures and new technologies all change over time (compare the consistency of folk magic/medicine over even small geographic regions, or in the same place over several centuries). …
So—just like corporations—‘selection’ of cultures happens rarely with each ‘generation’ spanning centuries or millennia, typically has little to do with how reality-based their beliefs tend to be (for a selection coefficient approaching zero), and if one culture did in fact consume another one thanks to more useful beliefs about some herb, it is likely to backslide under the bombardment of memetic mutation (so any selection is spent just purging mutations, creating a mutation-selection balance); under such conditions, there will be little long-term ‘evolution’ towards higher optima, and the information content of culture will be minimal and closely constrained to only the most universal, high-fitness-impact, and memetically-robust aspects.
Ideas are genes, or the organisms that carry them.
Idioms: “fit fiction” / “meme” / “infodemiology” / “going viral”
Criticisms:
Many ideas are not like viruses, where mere exposure leads to infection. News items have something like this character (because they are new), but political opinions are multisided, and the direction of the flow of influence is not as simple. This is the distinction between models of opinion dynamics and models of social spreading phenomena.
Autopsy of a metaphor: The origins, use and blind spots of the ‘infodemic’ (2021) by Felix Simon & Chico Camargo.
- What counts as a pathogen? A claim, a story, or a collection of claims or stories? All information, or only ‘bad’ information?
- Diseases are usually not shared intentionally, but information sharing is usually intentional.
- Implies passive audiences who become ‘infected’ with information against their will.
- Unlike a pandemic, there is no single root cause behind the spread of (mis)information.
- Implies that (mis)information can be controlled by something akin to public health measures.
Toxoplasma is a neat little parasite that is implicated in a couple of human diseases including schizophrenia. Its life cycle goes like this: it starts in a cat. The cat poops it out. The poop and the toxoplasma get in the water supply, where they are consumed by some other animal, often a rat. The toxoplasma morphs into a rat-compatible form and starts reproducing. Once it has strength in numbers, it hijacks the rat’s brain, convincing the rat to hang out conspicuously in areas where cats can eat it. After a cat eats the rat, the toxoplasma morphs back into its cat compatible form and reproduces some more. Finally, it gets pooped back out by the cat, completing the cycle.
What would it mean for a meme to have a life cycle as complicated as toxoplasma?
Consider the war on terror. They say that every time the United States bombs Pakistan or Afghanistan or somewhere, all we’re doing is radicalizing the young people there and making more terrorists. Those terrorists then go on to kill Americans, which makes Americans get very angry and call for more bombing of Pakistan and Afghanistan.
Taken as a meme, it’s a single parasite with two hosts and two forms. In an Afghan host, it appears in a form called ‘jihad’, and hijacks its host into killing himself in order to spread it to its second, American host. In the American host it morphs in a form called ‘the war on terror’, and it hijacks the Americans into giving their own lives (and tax dollars) to spread it back to its Afghan host in the form of bombs.
From the human point of view, jihad and the War on Terror are opposing forces. From the memetic point of view, they’re as complementary as caterpillars and butterflies. Instead of judging, we just note that somehow we accidentally created a replicator, and replicators are going to replicate until something makes them stop.
… the notion that scholarly ideas and findings can escape the nuanced, cautious world of the academic seminar and transform into new forms, even becoming threats, becomes more of a compelling metaphor if you think of academics as professional crafters of ideas intended to survive in a hostile environment. … the escape of a faulty idea could have—and has had—dangerous consequences for the world.
Academic settings provide an evolutionarily challenging environment in which ideas adapt to survive. The process of developing and testing academic theories provides metaphorical gain-of-function accelerations of these dynamics. To survive peer review, an idea has to be extremely lucky or, more likely, crafted to evade the antibodies of academia (reviewers’ objections). By that point, an idea is either so clunky it cannot survive on its own—or it is optimized to thrive in a less hostile environment.
Think tanks and magazines like the Atlantic (or Foreign Policy) serve as metaphorical wet markets where wild ideas are introduced into new and vulnerable populations. Although some authors lament a putative decline of social science’s influence, the spread of formerly academic ideas like intersectionality and the use of quantitative social science to reshape electioneering suggest that ideas not only move from the academy but can flourish once transplanted. This is hardly new: Terms from disciplines including psychoanalysis (“ego”), evolution (“survival of the fittest”), and economics (the “free market” and Marxism both) have escaped from the confines of academic work before.
This metaphor is made explicit when reinforcement-learning-based recommender systems interact with humans.
There are great interests as well as many challenges in applying reinforcement learning (RL) to recommendation systems. In this setting, an online user is the environment; neither the reward function nor the environment dynamics are clearly defined, making the application of RL challenging.
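The setting the quote describes can be sketched as a simple epsilon-greedy bandit in which the user is the environment and engagement is the reward. This is a minimal illustration, not any production system; the item names and parameters are hypothetical.

```python
import random

# Minimal sketch: an epsilon-greedy bandit recommender in which the user
# plays the role of the RL environment and a click is the reward signal.

class BanditRecommender:
    """Explore occasionally; otherwise exploit the highest observed click rate."""

    def __init__(self, items, epsilon=0.1):
        self.items = list(items)
        self.epsilon = epsilon
        self.clicks = {i: 0 for i in self.items}
        self.shows = {i: 0 for i in self.items}

    def recommend(self):
        if random.random() < self.epsilon:  # explore
            return random.choice(self.items)
        # exploit: pick the item with the highest observed click rate
        return max(self.items,
                   key=lambda i: self.clicks[i] / self.shows[i] if self.shows[i] else 0.0)

    def observe(self, item, clicked):
        """The user's response is the environment's reward."""
        self.shows[item] += 1
        self.clicks[item] += int(clicked)

rec = BanditRecommender(["long read", "cat video", "outrage post"])
# A simulated user who reliably clicks only the outrage post:
for _ in range(200):
    item = rec.recommend()
    rec.observe(item, clicked=(item == "outrage post"))
print(rec.recommend())  # most likely "outrage post"
```

Nothing in this loop asks whether the engagement is good for the user; the reward function simply is what it is, which is precisely what the attention-economy criticisms above are aimed at.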
Such as a ‘parasite’.
Idioms: “infodemic”
Together, these innovations have done nothing less than transform our way of living, learning, and relating to one another. But they have always had natural enemies. One, an ancient parasite, has recently mutated into something like an epistemic super-virus.
Rather than being static, [theories like QAnon] are more like viruses that constantly adapt and reconfigure themselves in order to persist and spread more rampantly. The supporters of these movements actively look for messaging that allows them to escape policy violations; often while doing so, they land on softer and more moderate ways to frame their ideology—and in the long run, this allows them to reach and convince an even wider audience. It’s almost like a bacterial infection that becomes more insidious and difficult to treat after resisting a partial course of antibiotics.
See Sander van der Linden and Stephan Lewandowsky’s work on psychological inoculation.
To review, there’s a general idea that strong (social) selection on a characteristic imperfectly correlated with some other metric of goodness can be bad for that metric, where weak (social) selection on that characteristic was good. If you press scientists a little for publishable work, they might do science that’s of greater interest to others. If you select very harshly on publication records, the academics spend all their time worrying about publishing and real science falls by the wayside. … A hypothesis I find plausible is that the Internet, and maybe television before it, selected much more harshly from a much wider field of memes; and also allowed tailoring content more narrowly to narrower audiences. The Internet is making it possible for ideas that are optimized to appeal hedonically-virally within a filter bubble to outcompete ideas that have been even slightly optimized for anything else. We’re looking at a collapse of reference to expertise because deferring to expertise costs a couple of hedons compared to being told that all your intuitions are perfectly right, and at the harsh selective frontier there’s no room for that. We’re looking at a collapse of interaction between bubbles because there used to be just a few newspapers serving all the bubbles; and now that the bubbles have separated there’s little incentive to show people how to be fair in their judgment of ideas for other bubbles, it’s not the most appealing Tumblr content. Print magazines in the 1950s were hardly perfect, but they could get away with sometimes presenting complicated issues as complicated, because there weren’t a hundred blogs saying otherwise and stealing their clicks. Or at least, that’s the hypothesis.
I would like to coin a “Malthus’s law” of information. Recall that Malthus noted that the number of stomachs grew geometrically but the amount of food grew linearly. Pool (1984) noted that the supply of information (in virtually every medium) grows exponentially whereas the amount that is consumed grows at best linearly. This is ultimately due to the fact that our mental powers and the time available to process information are constrained. This has the uncomfortable consequence that the fraction of the information produced that is actually consumed is asymptoting towards zero.
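The asymptotic claim in the quote can be checked with a toy calculation: if supply grows exponentially and consumption at best linearly, their ratio tends to zero. All constants below are arbitrary.

```python
import math

# Toy illustration of the quoted 'Malthus's law' of information: supply
# grows exponentially, consumption linearly, so the fraction of produced
# information that is ever consumed tends to zero.

def fraction_consumed(t, supply_rate=0.5, consumed_per_year=0.1, initial_supply=1.0):
    supply = initial_supply * math.exp(supply_rate * t)   # exponential growth
    consumed = consumed_per_year * t                      # linear growth
    return consumed / supply

for t in (1, 10, 50):
    print(t, fraction_consumed(t))
```

Whatever the constants, the exponential in the denominator eventually dominates any linear numerator, so the printed fractions shrink towards zero.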
I can’t help but liken the angry collective tweet storms to watching a flood erode a landscape with no ground-cover plants to slow it down. The natural processes of context and attention are lost.
“Polarisation is a mental health issue that results from collective trauma and threat perception.” See: the Collective Psychology Project, introduced by Alex Evans here. Also, more on psychological perspective of polarisation.
… In both studies, daily political events consistently evoked negative emotions, which corresponded to worse psychological and physical well-being, but also increased motivation to take political action (e.g., volunteer, protest) aimed at changing the political system that evoked these emotions in the first place. Understandably, people frequently tried to regulate their politics-induced emotions; and regulating these emotions using effective cognitive strategies (reappraisal and distraction) predicted greater well-being, but also weaker motivation to take action. Although people can protect themselves from the emotional impact of politics, frequently-used regulation strategies appear to come with a trade-off between well-being and action.
Idioms: “information space” / “information environment” / “consume content”
Criticisms:
- A large part of the ‘information environment’ is online, which differs significantly from the ancestral environments in which our intuitions were formed.
- Environmental Protection
- Urban Planning
- Public Health
First, let’s take a step back: if social media is a new city, why is it so hard to govern? Why don’t real cities see millions of citizens fall into cults in a manner of months? How can they have conferences without (Gamergate-scale) harassment, or clubs that don’t turn people into propaganda-spewing automatons? Why don’t they have waves of Nazi recruitment? What does the physical city have that the virtual one doesn’t?
Physics. That is, physical limits.
As a society, we’ve evolved a combination of rules, norms, and design patterns that work, more or less, to rein in some kinds of terrible behavior. Those rules assume that we haven’t developed superpowers. Online, however, people do indeed have powers like cloning (bot armies), teleportation (ability to post in many places simultaneously), disguise (sock puppets), and so on. In a physical city, any single propagandist is limited by vocal stamina or wallet capacity. In the online city, the same person can post in 400 groups (of tens of thousands of people) each hour, for free. In a physical city, assuming a new identity involves makeup, forged documents, and lots of hard work. In the city of social media, it requires a two-minute signup process to make a new account. The physical city is populated by human beings. In the city of social media, you could be talking at any time to someone who is secretly a robot. In a physical city, travel takes time. In the city of social media, it’s trivial for Macedonian teenagers to assume the identities of thousands of people in a different hemisphere.
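One concrete way platforms try to reintroduce such ‘physics’ is rate limiting. The token-bucket limiter below, with entirely hypothetical parameters, caps how fast one account can post, much as vocal stamina caps a street-corner propagandist; it is a sketch of the pattern, not any platform's policy.

```python
import time

# Hypothetical sketch: a token-bucket rate limiter as artificial 'physics'
# for the online city. Posting spends a token; tokens refill slowly.

class TokenBucket:
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_second
        self.last = time.monotonic()

    def allow_post(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, up to capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_second=0.01)  # ~1 post / 100 s
allowed = sum(bucket.allow_post() for _ in range(400))
print(allowed)  # 5: the burst is spent, and the loop is far too fast to refill
```

The limiter restores one physical constraint (stamina); it does nothing about the others the passage lists, like cloning or disguise, which need their own mechanisms.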
See: Future Crunch.
Genetic mechanisms designed for survival in prehistoric times can be maladaptive in the modern day. Thus, our genetic mechanisms for storing and using fat, for example, evolved in times when doing this was essential for our survival. But these mechanisms no longer serve our survival needs in a modern technological society, where there is a McDonald’s on every other corner. The logic of markets will guarantee that exercising a preference for fat-laden fast food will invariably be convenient because such preferences are universal and cheap to satisfy. Markets accentuate the convenience of satisfying uncritiqued first-order preferences, and they will do exactly the same thing with our preferences for memes consistent with beliefs that we already have—make them cheap and easily attainable. For example, the business model of Fox News (targeting a niche meme-market) has spread to other media outlets on both the Right and the Left (e.g., CNN, Breitbart, the Huffington Post, the Daily Caller, the New York Times, the Washington Examiner). This trend has accelerated since the 2016 presidential election in the United States.
In short, just as we are gorging on fat-laden food that is not good for us because our bodies were built by genes with a selfish replicator survival logic, so we are gorging on memes that fit our resident beliefs because cultural replicators have a similar survival logic. And just as our overconsumption of fat-laden fast foods has led to an obesity epidemic, so our overconsumption of congenial memes has made us memetically obese as well. One set of replicators has led us to a medical crisis. The other set has led us to a crisis of the public communication commons whereby we cannot converge on the truth because we have too many convictions that drive myside bias. And we have too many mysided convictions because there is too much coherence to our belief networks due to self-replicating memeplexes rejecting memes unlike themselves.
… like convenience food companies, the social media industry has seduced us by concocting a product that takes advantage of characteristics and inclinations that we possess because they helped our ancestors survive and reproduce. In the case of caloric needs, our brains orient us toward seeking lots of calories, fats, and sugars. In the case of social needs, our brains orient us toward vigilantly monitoring the social world and maintaining good relational and moral standing within our close networks and broader tribes. This modern social diet, however, undermines healthy socializing and contributes to growing social problems of loneliness, social disengagement, and tribal conflict just as a fast or junk food diet undermines healthy eating and contributes to a range of health problems.
A juvenile critic recently said that television was “chewing gum for the eyes.” In the late nineteenth century, a bitter critic called cheap novels “the chewing gum of literature, offering neither savor nor nutriment, only subserving the mechanical process of mastication.” But chewing gum (an American invention and an American expression) itself may have a symbolic significance. We might say now that chewing gum is the television of the mouth. There is no danger so long as we do not think that by chewing gum we are getting nourishment. But the Graphic Revolution has offered us the means of making all experience a form of mental chewing gum, which can be continually sweetened to give us the illusion that we are being nourished.
News is to the mind what sugar is to the body.
Such as ‘pollution’ or ‘wildfire’.
Criticisms- Emergency frames can be ‘emotionally draining and create exhaustion, anxiety, guilt and fear.’ They also empower different groups unevenly, and may help authorities justify taking undesirable actions. See The political effects of emergency frames in sustainability (2021) by Patterson et al.
- Regulation to improve the quality of information is environmental protection regulation.
The influence of factious leaders may kindle a flame within their particular States, but will be unable to spread a general conflagration through the other States. A religious sect may degenerate into a political faction in a part of the Confederacy; but the variety of sects dispersed over the entire face of it must secure the national councils against any danger from that source. A rage for paper money, for an abolition of debts, for an equal division of property, or for any other improper or wicked project, will be less apt to pervade the whole body of the Union than a particular member of it; in the same proportion as such a malady is more likely to taint a particular county or district, than an entire State.
Finally, getting back to the hurricane/climate change metaphor, one can add two remarks. The first is that fighting a hurricane is supposed to be easier than fighting climate change but, as a matter of fact, it is not. While being different in their objectives, style, and methods, Russia is no less a challenge than China. The second is that climate change makes hurricanes more likely, which raises the question of Russia-China interactions on information warfare: To what extent are they emulating, copying each other, or even exchanging good practices, from the Donbass to Hong Kong?
Perhaps the most apt historical model for algorithmic regulation is not trust-busting, but environmental protection. To improve the ecology around a river, it isn’t enough to simply regulate companies’ pollution. Nor will it help to just break up the polluting companies. You need to think about how the river is used by citizens—what sort of residential buildings are constructed along the banks, what is transported up and down the river—and the fish that swim in the water. Fishermen, yachtsmen, ecologists, property developers, and area residents all need a say. Apply that metaphor to the online world: Politicians, citizen-scientists, activists, and ordinary people will all have to work together to co-govern a technology whose impact is dependent on everyone’s behavior, and that will be as integral to our lives and our economies as rivers once were to the emergence of early civilizations.
Martial
See: this by Renee DiResta.
Idioms“contest of ideas” / “information warfare” / “cognitive warfare” / “cognitive security” / “epistemic security” / “culture war”
CriticismsStephen Flusberg, Teenie Matlock & Paul Thibodeau, War metaphors in public discourse (2018)
- Evokes a sense of fear.
- Some work has found that ‘violent metaphors can influence views towards political violence, particularly in individuals with aggressive traits’ (Kalmoe, 2014).
- Reduces people to the battlefields on which the war is being fought, a type of dehumanisation.
- Promotes an adversarial mindset.
- Establishes an expectation that the ‘war’ will, at some point, end, either in victory or defeat.
- The prevalence of conflict-related idioms for disagreement in the English language doesn’t necessarily imply a metaphor for war. They are equally applicable to less adversarial concepts like board games or sports.
War constitutes an emergency frame. Emergency frames can be ‘emotionally draining and create exhaustion, anxiety, guilt and fear.’ They also empower different groups unevenly, and may help authorities justify taking undesirable actions. See The political effects of emergency frames in sustainability (2021) by Patterson et al.
See also: Iain King, Towards an Information Warfare Theory of Victory (2020).
If you aren’t a combatant, you are the territory.
World War III is a guerrilla information war with no division between military and civilian participation.
… I think we need to get over this war metaphor, this war on science metaphor. … the epistemic security angle has its benefits, but one of the issues with it is that once we start talking about security or threats to security … we step outside of the realm of normal political discourse and we start talking about things like executive powers, emergency powers, trade-offs between freedom and security (it has to be one or the other). And I think that we’re not there yet and I don’t think we should jump straight to that, we [should] definitely take a step back and think about … more holistic, systemic approaches.
… the metaphor of reasoning as a kind of defensive combat is baked right into the English language, so much so that it’s difficult to speak about reasoning at all without using militaristic language.
We talk about our beliefs as if they’re military positions, or even fortresses, built to resist attack. Beliefs can be deep-rooted, well-grounded, built on fact, and backed up by arguments. They rest on solid foundations. We might hold a firm conviction or a strong opinion, be secure in our beliefs or have unshakeable faith in something.
Arguments are either forms of attack or forms of defense. If we’re not careful, someone might poke holes in our logic or shoot down our ideas. We might encounter a knock-down argument against something we believe. Our positions might get challenged, destroyed, undermined, or weakened. So we look for evidence to support, bolster, or buttress our position. Over time, our views become reinforced, fortified, and cemented. And we become entrenched in our beliefs, like soldiers holed up in a trench, safe from the enemy’s volleys.
And if we do change our minds? That’s surrender. If a fact is inescapable, we might admit, grant, or allow it, as if we’re letting it inside our walls. If we realize our position is indefensible, we might abandon it, give it up, or concede a point, as if we’re ceding ground in a battle.
What would it look like if there were medics on the battlefield of the information wars?
No matter where you live, if you are on the Internet you are in a war zone. You may not feel the terror of bombs dropping around you, nor the horrors of physical violence, but there is war nonetheless. For as long as humans have fought, military tactics have always involved propaganda, information manipulation, and deception. Today, digital communication technologies have changed the landscape of what the U.S. military calls “irregular warfare.” As opposed to “conventional warfare,” this kind of war is not primarily about the use of physical force, and it is not primarily about targeting an adversary’s military assets. Irregular warfare includes economic warfare (sanctions), cyber warfare (attacks within the digital domain), and political warfare (diplomacy), but it most pervasively manifests as some version of population-centric information and narrative warfare. The goal is the same as conventional war: to acquire by coercive means some political or strategic good, such as economic, geographical, or military advantage. In this kind of war, the mind of every citizen is a potential target, and tactical victories could include a delegitimized election, implosions of civic discourse, and mutual hatred sparked between fellow citizens.
Just as the destructive power of nuclear weapons forced humanity to reorient to the idea that mutually assured destruction exists at the extremes of physical violence, so advances in information warfare require us to face the same truth of inevitable self-destruction, and to mutually back away.
… as an attitudinal foundation for relating to society and technology, Waldenponding is, I am convinced, a terrible philosophy at both a personal and collective level. It’s a world-and-life negation. A kind of selfish free-riding/tragedy of the commons: not learning to handle your share of the increased attention-management load required to keep the Global Social Computer in the Cloud (GSCITC) running effectively.
And so it is that seduction and persuasion are similar. Both involve surrender, but it is not surrender to another person, for there would be no dignity in that. Rather, when I am seduced I surrender to love, and when I am persuaded I surrender to truth.
Social
As the authors write, “Increased political diversity would improve social psychological science by reducing the impact of bias mechanisms such as confirmation bias, and by empowering dissenting minorities to improve the quality of the majority’s thinking.”
One of the study’s authors, Philip E. Tetlock of the University of Pennsylvania, put it to me more bluntly. Expecting trustworthy results on politically charged topics from an “ideologically incestuous community,” he explained, is “downright delusional.”
Idioms“on the one hand …, on the other hand …”
i like to think that the gestures we use in everyday speech are like proto dance moves that don’t have the space or freedom or permission to be fully expressed. they‘re a watered-down throwback to a time when we talked with our whole entire bodies
Imagine a culture where an argument is viewed as a dance, the participants are seen as performers, and the goal is to perform in a balanced and aesthetically pleasing way. In such a culture, people would view arguments differently, experience them differently, carry them out differently, and talk about them differently. But we would probably not view them as arguing at all: they would simply be doing something different.
… Suppose we could capture and quantify three streams in particular: information that is new (created within the past month), middle-aged (created 10 to 50 years ago, by the generations that include the child’s parents and grandparents), and old (created more than 100 years ago).
Whatever the balance of these categories was in the 18th century, the balance in the 20th century surely shifted toward the new as radios and television sets became common in American homes. And that shift almost certainly became still more pronounced, and quickly so, in the 21st century. When the majority of Americans began using social media regularly, around 2012, they hyper-connected themselves to one another in a way that massively increased their consumption of new information—entertainment such as cat videos and celebrity gossip, yes, but also daily or hourly political outrages and hot takes on current events—while reducing the share of older information. What might the effect of that shift be?
Religious
Ideologies are religions and political parties are organised religions. Political activity is religious activity.
Idioms“preaching to the converted” / “preaching to the choir”
Absolutely unmixed attention is prayer.
Groups of people, outraged by some real or imagined transgression, often respond in a way that is wildly disproportionate to the occasion, thus ruining the transgressor’s day, month, year, or life. To capture that phenomenon, we might repurpose an old word: lapidation. Technically, the word is a synonym for stoning, but it sounds much less violent. It is also obscure, which makes it easier to enlist for contemporary purposes. Lapidation plays a role in affirming, and helping to constitute, tribal identity. It typically occurs when a transgressor is taken to have violated a taboo, which helps account for the different people and events that trigger left-of-center and right-of-center lapidation. One of the problems with lapidation is that it often accomplishes little; it expresses outrage, and allows people to signal their identity, but does no more.
[Media manipulators] believe that being digitally crucified is to their cause’s long-term advantage, especially if media outlets will cover their digital execution.
… Illich invites us to consider what it might mean to discipline our vision, and I’m inviting us to consider whether this is not a better way of framing our relationship to the digital media ecosystem. The upshot is recognizing the additional dimensions of what is often framed as a merely intellectual problem and thus met with laughably inadequate techniques. Perceptual askesis would involve our body, our affections, our desires, and our moral character as well as our intellect.
The first step would be to recognize that vision is, in fact, subject to training, that it is more than the passive reception of sensory data. In fact, of course, our vision is always being disciplined. Either it happens unwittingly as a function of our involvement with the existing cultural structures and patterns, or we assume a measure of agency over the process. Illich’s historical work, then, denaturalizes vision in order to awaken us to the possibility of training our eyes. Our task, then, would be to cultivate an ethos of seeing or new habits of vision ordered toward the good. And, while the focus here has fallen on sight, Illich knew and we should remember that all the senses can be likewise trained.
Mathematical
The claim that public discourse can be formally modelled as a (massive) stochastic process.
Of course, there are many modelling paradigms that could be used. Options include voter models, agent-based models and models from the field of opinion dynamics.
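To make the idea concrete, here is a minimal sketch of the simplest of these paradigms, the voter model. This is my own illustrative toy (agents on a ring, binary opinions), not taken from any particular paper: at each step a randomly chosen agent adopts the opinion of a randomly chosen neighbour.

```python
import random

def voter_model_step(opinions, neighbours):
    """One update: a random agent copies the opinion of a random neighbour."""
    i = random.randrange(len(opinions))
    opinions[i] = opinions[random.choice(neighbours[i])]

# Toy setting: 10 agents on a ring, each starting with a random binary opinion.
random.seed(0)
n = 10
opinions = [random.randint(0, 1) for _ in range(n)]
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

for _ in range(10_000):
    voter_model_step(opinions, neighbours)

# On any finite connected graph, the voter model eventually reaches consensus,
# so after enough steps the population typically holds a single opinion.
print(opinions)
```

Richer paradigms in opinion dynamics replace the blind copying rule with bounded confidence, stubborn agents, or media nodes, but the basic structure, a stochastic process over a graph of interacting opinion-holders, is the same.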
All individuals occupy a point in the space, the coordinates of which describe their opinions, worldview, and information consumption habits.
I originally saw this on Twitter but I can’t find the tweet. The claim was that certain geometric properties of high-dimensional space fit with our intuition for how the space of public opinion operates. In particular, two points in high-dimensional space can be ‘close’ along any one dimension, yet these small differences accumulate so that the Euclidean distance between the two points is very large. Likewise, in the space of possible worldviews, two individuals may hold very similar opinions on the vast majority of topics, and have very similar information consumption habits, yet the small differences accumulate and the two worldviews are incompatible overall.
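The geometric claim is easy to check numerically. In this illustrative sketch (my own numbers, chosen arbitrarily), two ‘worldviews’ in a 1,000-dimension opinion space differ by only 0.1 on every single dimension, yet are far apart in Euclidean distance:

```python
import math

# Two "worldviews" in a 1,000-dimensional opinion space, differing by
# only 0.1 on every single dimension.
dims = 1000
a = [0.0] * dims
b = [0.1] * dims

# Largest disagreement on any single issue.
per_dimension_gap = max(abs(x - y) for x, y in zip(a, b))

# Overall distance between the worldviews.
euclidean_gap = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

print(per_dimension_gap)  # 0.1 — close on every individual issue
print(euclidean_gap)      # sqrt(10) ≈ 3.16 — far apart overall
```

Since the squared gaps add across dimensions, the overall distance grows like the square root of the number of dimensions even when no single disagreement is large.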
Thank you to Peter Pomerantsev for a conversation on this topic.
This site is licensed under the Creative Commons public domain (CC-0) license, but attribution is appreciated. Views expressed are my own, and probably don't coincide with those of any entity with which I am affiliated.