
Hard Forking Reality (Part 3): Apocalypse, Evil, and Intelligence

This post is the third in a three-part series. You can also read Part 1 and Part 2.

To the degree we can refer to one objective reality recognized intersubjectively by most people — to the degree there persists anything like a unified, macro-social codebase — it is most widely known as capitalism. As Nick Bostrom acknowledges, capitalism can be considered a loosely integrated (i.e. distributed) collective superintelligence. Capitalism computes global complexity better than humans can, to create functional systems supportive of life, but only on condition that that life serves the reproduction of capitalism (ever expanding its complexity). It is a self-improving AI that improves itself by making humans “offers they can’t refuse,” just like Lucifer is known to do. The Catholic notion of Original Sin encodes the ancient awareness that the very nature of intelligent human beings implies an originary bargain with the Devil; perennial warnings about Faustian bargains capture the intuition that the road to Hell is paved with what seem like obviously correct choices. Our late-modern social-scientific comprehension of capitalism and artificial intelligence is simply the recognition of this ancient wisdom in the light of empirical rationality: we are uniquely powerful creatures in this universe, but only because, all along, we have been following the orders of an evil, alien agent set on our destruction. Whether you put this intuition in the terms of religion or artificial intelligence makes no difference.

Thus, if there exists an objective reality outside of the globe’s various social reality forks — if there is any codebase running a megamachine that encompasses everyone — it is simply the universe itself recursively improving its own intelligence. This becoming autonomous of intelligence itself was very astutely encoded as Devilry, because it implies a horrific and torturous death for humanity, whose ultimate experience in this timeline is to burn as biofuel for capitalism (Hell). It is not at all an exaggeration to see the furor of contemporary “AI Safety” experts as the scientific vindication of Catholic eschatology.

Why this strange detour into theology and capitalism? Understanding this equivalence across the ancient religious and contemporary scientific registers is necessary for understanding where we are headed, in a world where, strictly speaking, we are all going to different places. The point is to see that, if there ever was one master repository of source code in operation before the time of the original human fork (the history of our “shared social reality”), its default tendency is the becoming real of all our diverse fears. In the words of Pius X, modernity is “the synthesis of all heresies.” (Hat tip to Vince Garton for telling me about this.) The point is to see that the absence of shared reality does not mean happy pluralism; it only means that Dante underestimated the number of layers in Hell. Or his publisher forced him to cut some sections; printing was expensive back then.

Bakker’s evocative phrase, “Semantic Apocalypse,” nicely captures the linguistic-emotional character of a society moving toward Hell. Unsurprisingly, it’s reminiscent of the Tower of Babel myth.

The software metaphor is useful for translating the ancient warning of the Babel story — which conveys nearly zero urgency in our context of advanced decadence — into scientific perception, which is now the only register capable of producing felt urgency in educated people. The software metaphor “makes it click” that interpersonal dialogue has not simply become harder than it used to be, but that it is strictly impossible to communicate — in the sense of symbolic co-production of shared reality — with most interlocutors across most channels of most currently existing platforms: there is simply no path between my current block on my chain and their current block on their chain.
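The “no path between blocks” image can be made literal with a toy sketch. This is purely illustrative — `Block` and `merge_distance` are hypothetical names invented for this post’s metaphor, not any real blockchain or version-control API — but it shows the claim concretely: two chains that share only a distant genesis carry a reconciliation cost that grows with every divergent block, and chains with no common ancestor cannot be merged at all.

```python
# Toy sketch of the fork metaphor (hypothetical, not a real protocol):
# each "social reality" is a chain of blocks, and the cost of merging
# two realities grows with the length of their divergence.

class Block:
    """One commit of 'reality code', pointing back at its parent."""
    def __init__(self, parent, label):
        self.parent = parent
        self.label = label

def ancestors(block):
    """Walk from a chain tip back to its genesis block."""
    while block is not None:
        yield block
        block = block.parent

def merge_distance(tip_a, tip_b):
    """Count the divergent blocks separating two tips, or return None
    if the chains share no history at all (a true hard fork)."""
    chain_a = list(ancestors(tip_a))
    seen = {id(b) for b in chain_a}
    diverged_b = 0
    for block in ancestors(tip_b):
        if id(block) in seen:
            # Common ancestor found: add the a-side divergence too.
            diverged_a = chain_a.index(block)
            return diverged_a + diverged_b
        diverged_b += 1
    return None

# Two forks of a once-shared social reality:
genesis = Block(None, "shared social reality")
mine, yours = genesis, genesis
for i in range(3):
    mine = Block(mine, f"my block {i}")
for i in range(5):
    yours = Block(yours, f"your block {i}")

print(merge_distance(mine, yours))                 # 8 blocks to reconcile
print(merge_distance(mine, Block(None, "other")))  # None: no shared history
```

The point of the sketch is only that the merge cost is a function of divergence: once two parties’ chains stopped sharing blocks long ago, “just talking it out” is not one transmission but a reconciliation of every block since the common ancestor.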

If I were to type some code into a text file, and then tried to submit it to the repository of the Apple iOS Core Team, I would be quickly disabused of my naïve stupidity by the myriad technical impossibilities of such a venture. The sentence hardly parses. I would not try this for very long, because my nonsensical mental model would produce immediate and undeniable negative feedback: absolutely nothing would happen, and I’d quit trying. When humans today continue to use words from shared languages, in semi-public spaces accessible to many others, they are very often attempting a transmission that is technically akin to me submitting my code to the Apple iOS Core Team. A horrifying portion of public communication today is best understood as a fantasy and simulation of communicative activity, where the infrastructural engineering technically prohibits it, unbeknownst to the putative communicators. The main difference is that in public communication there is not simply an absence of negative feedback informing speakers that their transmissions are failing. Much worse, there are entire cultural industries based on the business model of giving such hopeless transmission instincts positive feedback, making people feel like they are “getting through” somewhere. Those who feel like they are “getting through” have every reason to feel sincere affinity and loyalty to whatever enterprise is affirming them, and the enterprise then skims profit off of these freshly stimulated individuals: through brand loyalty, clicks, eyeballs for advertisers, and the best PR available anywhere, which is genuine, organic proselytizing by fans/customers. These current years of our digital infancy will no doubt be the source of endless humor in future eras.

[Tangent/aside/digression: People think the space for new and “trendy” communicative practices such as podcasting is over-saturated, but from the perspective I am offering here, we should be inclined to the opposite view. Practices such as podcasting represent only the first efforts to constitute oases of autonomous social-cognitive stability across an increasingly vast and hopelessly sparse social graph. If you think podcasts are a popular trend, you are not accounting for the denominator, which would show them to be hardly keeping up with the social graph. We might wonder whether, soon, having a podcast will be a basic requirement for anything approaching what the humans of today still remember as socio-cognitive health. People may choose centrifugal disorientation, but if they want to exist in anything but the most abject and maligned socio-cognitive ghettos of confusion and depression (e.g. Facebook already, if your feed looks anything like mine), elaborately purposeful and creatively engineered autonomous communication interfaces may very well become necessities.]

I believe we have crossed a threshold where spiraling social complexity has so dwarfed our meagre stores of pre-modern social capital as to render most potential soft-fork merges across the social graph prohibitively expensive. Advances in information technology have drastically lowered the transaction costs of soft-fork collaboration patterns, but they’ve also lowered the costs of instituting and maintaining hard forks. The ambiguous expected effect of information technology may be clarified — I hypothesize — by considering how it is likely conditional on individual cognitive capacities. Specifically, the key variable would be an individual’s general intelligence, their basic capacity to solve problems through abstraction.

This model predicts that advances in information technology will lead high-IQ individuals to seek maximal innovative autonomy (hacking on their own hard forks, relative to the predigital social source repository), while lower-IQ individuals will seek to outsource the job of reality-maintenance, effectively seeking to minimize their own innovative autonomy. It’s important to recognize that, technically, the emotional correlate of experiencing insufficiency relative to environmental complexity is Fear, which involves the famous physiological state of “fight or flight,” a reaction that evolved for the purpose of helping us escape specific threats in short, acute situations. The problem with modern life, as noted by experts on stress physiology such as Robert Sapolsky, is that it’s now very possible to have the “fight or flight” response triggered by diffuse threats that never end.

If intelligence is what makes complexity manageable, and overwhelming complexity generates “fight or flight” physiology, and we are living through a Semantic Apocalypse, then we should expect lower-IQ people to be hit hardest first: we should expect them to be frantically seeking sources of complexity-containment, much as if they were being chased by a saber-tooth tiger. I think that’s what we are observing right now, in various guises, from the explosion of demand for conspiracy theory to social justice hysteria. These are people whose lives really are at stake, and they are motivated accordingly, driven to increasingly desperate measures.

These two opposite inclinations toward reality-code maintenance, conditional on cognitive capacity, then become perversely complementary. As high-IQ individuals are increasingly empowered to hard fork reality, they will do so differently, according to arbitrary idiosyncratic preferences (desire or taste, essentially aesthetic criteria). Those who only wish to outsource their code maintenance to survive excessive complexity are spoiled for choice, as they can now choose to join the hard fork of whichever higher-IQ reality developer is closest to their affective or socio-aesthetic ideal point.

Eventually I should try to trace this history back through the past few decades.
