
There are no humans on the internet

The best way to build community and make friends on the internet is to treat all internet interlocutors as if they are real humans in a real-life, local village. If you do this, over time many people will like you and want to form an alliance with you. Because most internet behavior is so atrocious, if you abide by traditional interpersonal norms (reciprocity, manners, courtesy, etc.), you quickly become a strange attractor. You become a kind of weird avatar from another time and place. Of course, you will encounter many haters in the short run. They will interpret your quaint earnestness as an ironic performance, or “soy boy” pusillanimity, or some kind of 4-dimensional hyper-grift. But in the long run, traditional interpersonal ethics are irresistibly attractive because they are, in fact, good and superior.

Now, of course, there is a reason why average internet behavior is so atrocious.

It is seemingly impossible to abide by small-village norms on the internet, simply because those norms evolved in contexts where villagers had no choice but to play iterated games and everyone could remember everyone else’s behavior. On the internet, neither of these conditions holds: nobody is forced to remain in any grouping over time, and there are so many people that nobody can remember everyone else’s behavior. There are strong incentives to exploit others, and no obvious reason to invest much care in others. So if you treat every potential interlocutor with care, you’ll quickly waste all of your resources and be exploited into nothingness.

However, it is feasible to apply traditional ethics to everyone who enters your personal sphere for the first time, and then simply ignore them as soon as they fail to reciprocate. In game theory this strategy is called “tit for tat,” and in many contexts it has been found to be among the best possible strategies. Many people seem to follow a variant of this strategy in their “blocking” behavior. On Twitter, many people will block someone at the first indication of their enemy status. But most of these people are not really playing traditional-ethics tit-for-tat reciprocity, because they’re usually also lobbing hand-grenades into the enemy camp for fun and profit on a daily basis. I’m saying one should treat the entire universe of internet denizens on a courteous, tit-for-tat basis: If they’ve done me no wrong, then I won’t do them any wrong. If they come into my sphere, I will treat them as a real friend until evidence of bad behavior, in which case I will not retaliate but simply ignore them.
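To make the logic concrete, here is a minimal sketch of this policy in Python, pitted against a standard iterated prisoner’s dilemma. The payoff values and the opponent strategies are textbook assumptions rather than anything specific to the internet; the point is only that an agent who cooperates by default and disengages after the first betrayal can be exploited at most once, while still reaping full cooperation from anyone who reciprocates.

```python
# Minimal sketch: the "courteous, then exit" policy described above, compared
# against two textbook opponents. Payoffs are the standard iterated prisoner's
# dilemma values (T=5, R=3, P=1, S=0) -- illustrative assumptions only.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def courteous_exit(my_history, their_history):
    """Cooperate with every newcomer; after their first defection,
    stop engaging (return None) rather than retaliating."""
    if "D" in their_history:
        return None  # ignore, don't retaliate
    return "C"

def always_defect(my_history, their_history):
    return "D"

def classic_tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; a None move ends the relationship early."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        if move_a is None or move_b is None:
            break  # one side has disengaged
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play(courteous_exit, always_defect))       # exploited once, then exits
    print(play(courteous_exit, classic_tit_for_tat)) # sustained mutual cooperation
```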

Anyone who abides by this strategy will be surprised by how quickly a meaningful community emerges around them. This might seem obvious, even trite, but what’s not obvious is how to scale this strategy. Most people who operate this strategy find themselves in relatively tiny clusters. And almost inevitably, they form their own imaginary out-groups, and all the pitfalls of group-psychological bias emerge. What I’m really interested in is how to make this strategy scale, without limit or cessation.

I think I have figured out why this strategy is so hard to scale. The solution is hidden behind a deeply counter-intuitive paradox. It’s so counter-intuitive that it’s too psychologically difficult for most people to execute. But in certain ways I think I have been learning to do it, which is how I’ve become conscious of it.

The paradox is that to treat internet denizens humanely at scale, one must cultivate a brutal coldness toward all of the internet’s pseudo-human cues, which are typically visual (face pictures and text), applied to your sense organs by corporations for profit. These pseudo-human cues are systematically arranged, timed, conditioned, and differentially hidden or revealed to you by absolutely non-human artificial intelligence.

Your goal should be to hack this inhuman system of cues on your screen, with a brutal analytical coldness, in order to find and extract humans into potential relationships. One must stop seeing the internet as “a place to connect with others” and instead see it as nearly the opposite: It is a machine that stands almost impenetrably between and against humans, systematically exploiting our desire for connection into an accelerating divergence and alienation from each other. It is only when one genuinely cultivates this mental model, over time, that it becomes psychologically possible to see one’s computer for what it is: An utterly inhuman device for conducting operations on statistical aggregates, a device which only accidentally comes pre-packaged with an endless barrage of anthropomorphic visual metaphors.

Those are not people “behind” the avatars on your screen; those are functions in a machine. When we speak of “the algorithms,” we generally imagine them as code behind apps, but the difficult fact to admit is that “the algorithms” are primarily other people, or at least the names and face-pictures we “interact with.” The codebase of the Facebook app doesn’t really manipulate me; the code is not “gaming” me, because I have no biological machinery that allows complicated lines of technical language to trigger changes in my behavior. It is ultimately the creative energy of other human beings, uploaded to the machine, that is the driving force of what is manipulating me; the codebase only provides a set of game-rules through which other human beings are incentivized to apply their creative effort.

The horror of big social network platforms is not to be found in “technology” or “capitalism”; it is to be found in what we have become. Capitalism is only the name for what aggregates from the raw reality of what we really want, of what we really do. The solution is to desire differently. Desire is amenable to updating and collective organizing, at least to a degree, which cannot be said of advanced capitalism.

We must get to work, with icy discipline, creating systems to extract humans from the machine, which means to produce human relationships from what we do have in abundance: data. Human relationships are no longer given to anyone by default, so if you want them you must produce them through engineered systems, or else pay someone who can engineer them for you.
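What such a system might look like, in its most minimal hypothetical form, is a reciprocity ledger: a small piece of code that scores every interlocutor by whether they reciprocate attention, so that limited care flows toward people who behave like villagers rather than functions. The record format, field names, and thresholds below are illustrative assumptions only, not any platform’s actual data model; real interaction records would have to be exported from whatever platforms you actually use.

```python
# Hypothetical sketch only: a "reciprocity ledger" computed from exported
# interaction records (replies, DMs, mentions). The Interaction fields and
# the thresholds are illustrative assumptions, not any platform's API.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Interaction:
    person: str           # handle of the interlocutor
    initiated_by_me: bool  # did I make the first gesture?
    reciprocated: bool     # did they respond in kind?

def reciprocity_scores(interactions):
    """Fraction of my outgoing gestures that each person reciprocated."""
    sent = defaultdict(int)
    returned = defaultdict(int)
    for i in interactions:
        if i.initiated_by_me:
            sent[i.person] += 1
            if i.reciprocated:
                returned[i.person] += 1
    return {p: returned[p] / sent[p] for p in sent}

def candidates_for_real_relationships(interactions, threshold=0.5, min_gestures=3):
    """People who consistently reciprocate: worth extracting from the machine."""
    scores = reciprocity_scores(interactions)
    gestures = defaultdict(int)
    for i in interactions:
        if i.initiated_by_me:
            gestures[i.person] += 1
    return sorted(p for p, s in scores.items()
                  if s >= threshold and gestures[p] >= min_gestures)
```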

As an aside, “independent content creators” are somewhat misleadingly named; perhaps they are primarily community engineers. Truly independent creative effort, which successfully differentiates itself from the passively extracted “creative effort” of social media sheeple, is like a lightning rod that organizes around itself other like-minded humans looking for an exit from the machine. But of course, the independent community is its own machine, and successful “content creators” are essentially disciplined entrepreneurs running often rather sophisticated systems.

We should seek to build independent systems that are even more aggressively inhuman than big social network platforms — because they hack desire with even more precision — but they should output relationships and experiences that are far more authentically human than anything else currently available. And they should be able to do this at scale. More artificial intelligence, more automation, more precisely optimized processes, but engineered by individuals and small groups against, rather than for, the pseudo-human web.