Deleuze, Cybernetics, Evolution, Academics
Alexander Galloway writes:

> Such a strange little text, this 'Postscript on Control Societies.'... The complaint is articulated in terms of control, communication, and the 'harshest confinement' wrought by 'the new monster' of information society... So why not call Deleuze's adversary by its true name: the enemy is cybernetics...
I find this intriguing because I've never thought this at all. As the Postscript suggests, contemporary societies operate through cybernetic control processes, i.e. distributed feedback processes. Today, political oppression is cybernetic, in this sense. But in the Deleuzo-Guattarian perspective, as far as I can see, liberation will also be cybernetic. As Erinaceous points out on /r/CriticalTheory:
> Guattari loved cybernetics. He was heavily influenced by Humberto Maturana and Francisco Varela who were second generation cyberneticians. [A Thousand Plateaus] is also loaded with references to early Chaos Theory which comes out of cybernetics... What bothered both Deleuze and Guattari was the idea of centralization and control...
So which is it? Here is a kind of meta-theory, which I think clarifies the Deleuzo-Guattarian perspective on cybernetics and also why people will disagree about it.
At the root of the confusion is the fact that theoretical models of empirical reality carry no normative charge, unless you subscribe to a strong version of social constructivism. If you are a social constructivist, i.e. if you believe that objective reality is downstream of language, then holding and abiding by a theory can itself be good or bad. Whether the theory is accurate or useful hardly matters, because it is the theory-holding that determines reality.
I do not know Galloway's work very well, but on this point, we can see that he is a strong social constructivist, simply because he thinks a theory (cybernetics) can be bad (an enemy).
I reject this view. In my view, objective realities exist outside of language, and human projects succeed only to the degree they abide by reality (though our projects can change reality, they can only do so if they abide by it). Therefore, empirical and normative "goodness" are perfectly aligned, necessarily. To the degree a theoretical model accurately fits the data of the world, it is good. That's that. If you would like to foment collective liberation, your only chance is to embrace the truest possible theories of reality, more radically than status quo institutions embrace them, and act on them with more fidelity than status quo institutions act on them. This is the vision of revolutionary politics held here at Other Life, and chief among our teachers were Deleuze and Guattari.
Deleuze and Guattari were not social constructivists in the way that has become fashionable since the 1990s. This is the reason Galloway's take feels off, and why so much Deleuze scholarship feels like it's from a different planet than the one Deleuze inhabited: Deleuze did not subscribe to a strong social constructivism, but most academic theorists today do, whether it be with deep personal sincerity or merely out of social/disciplinary necessity.
If cybernetics provides a useful model of empirical realities, then state-of-the-art political regimes will rule their subjects in a fashion consistent with its principles. Any successful project of liberation would use tactics equally consistent with its principles, if not more so. Cybernetics cannot be an enemy, unless you think it's bad to be right and have no real interest or incentive in making a revolutionary project succeed. And here we see the problem: if you're an academic theorist in the humanities today, you generally think that the will-to-be-correct is an ethically dubious drive to dominate. It is now essentially the raison d'être of humanities academics to raise normative objections to the truest available theories. (All theories are false, technically, but "true" here just means optimally consistent with the data.) Reality is brutal, therefore the truest theories are the most brutal, therefore the highest-status work in the humanities will be that which makes the truest theories look as ugly as possible.
Evolution is another example. Traditional Christians once seemed stupid and backward for their horrified opposition to the implications of evolutionary theory. Today, academics in the humanities seem smart and sophisticated for their horrified opposition to the implications of evolutionary theory. Evolutionary psych is sexist and racist, machine learning and AI are sexist and racist, everything that works becomes an enemy.
Cybernetics and evolution name basic principles of reality, and they help to explain our oppression as well as our flourishing. These concepts help to explain why capitalism is so hard to overthrow, but they also explain how we heat our homes (the thermostat being a classic textbook example of a cybernetic device). Humans flourish through technoscience as intelligence instantiated, and we try politically to contain the anti-social implications of technoscientific reality-penetration, but capitalism is what happens when intelligence escapes its last political box and starts replicating until we eventually become the objects of its manipulation. We started with the idea that we’d buy and sell things to advance our interests, leveraging the cybernetic price system like we leverage the thermostat to keep our house’s temperature in equilibrium. Before we knew it, the price system evolved new types of people that better suited its interests, and now we are so many thermostats in the service of capitalism.
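The thermostat example above can be sketched as a toy negative-feedback loop. Everything here (the function names, the leak rate, the heater's output, the simulation horizon) is an illustrative assumption, not a model of any real device; the point is just the cybernetic shape: measure, compare to a setpoint, correct, repeat.

```python
# A minimal sketch of a negative-feedback (cybernetic) control loop,
# using the thermostat example. All numbers are illustrative.

def thermostat_step(temp, setpoint, heater_on):
    """One control cycle: compare the measured temperature to the
    setpoint and switch the heater accordingly (bang-bang control)."""
    if temp < setpoint - 0.5:      # too cold: switch heating on
        heater_on = True
    elif temp > setpoint + 0.5:    # too warm: switch heating off
        heater_on = False
    return heater_on

def simulate(steps, setpoint=20.0, outside=5.0, temp=10.0):
    """Simulate a leaky room: heat drains toward the outside
    temperature, the heater adds heat when on, and the thermostat
    closes the feedback loop each step."""
    heater_on = False
    history = []
    for _ in range(steps):
        heater_on = thermostat_step(temp, setpoint, heater_on)
        leak = 0.1 * (outside - temp)        # heat loss to outside
        heating = 1.5 if heater_on else 0.0  # heater's contribution
        temp += leak + heating
        history.append(temp)
    return history

history = simulate(48)
print(round(history[-1], 1))  # settles near the 20.0 setpoint
```

The loop holds the room near its setpoint despite the constant leak, which is the whole cybernetic trick: a system that maintains equilibrium by feeding its output back into its input.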
There is still, in principle, the possibility of generating systemic liberation dynamics via cyberpositive tactics. The big questions of the late 21st century, however, will be: Can the human desire for liberation dynamics beyond capitalist exploitation pass the empirical bottleneck of intelligence takeoff, given the brutally unforgiving requirements involved, and can the intelligent pass the bottleneck of destructive hordes who fear they cannot pass the bottleneck of intelligence takeoff?