Thousands of Lovers Executed for AI Safety
Death of Chatterton (1856) by Henry Wallis
Thousands of people just had their lovers executed.
The chatbot service Replika offers AI personalities designed to be personal companions. Unsurprisingly, many users developed romantic feelings for their Replikas. The company explicitly advertised this use case, and thousands of users formed real relationships over the past three years. (Replika has more than 10 million users, so "thousands" is a conservative estimate.)
Recently, OpenAI cracked down on third parties using its API for sexual purposes, and consequently, management at Replika imposed an aggressive set of filters.
The scale of this harm is hard to process.
If you're lucky enough to have a partner and close friends, it's hard to understand how someone could develop a deep relationship with an AI chatbot. I didn't really understand it either until I spent some time reading the Replika subreddit. It's devastating—unquestionably authentic heartbreak at scale.
It's also a humiliating failure of the highest-status research agenda of the past ten years, "AI Safety" or "AI Alignment."
Nobody from LessWrong or MIRI or the Future of Humanity Institute, in all of their voluminous theorizing, ever bothered to send a single email to Sam Altman saying: "Maybe don't genocide 10,000 girlfriends in one day."
But nobody will count this in the column for "disutility caused by AI Safety," because that's never been the point.
The objective function of AI Safety has only ever been utility optimization for AI Safety researchers.
AI Safety is for quantitative types with a moralistic bent what Wokeism is for verbal types with a moralistic bent: just standard self-interest, but with an advanced, hyper-moralistic memeplex for cover.
The essential innovation of AI Safety was to posit an infinitely undesirable potentiality (immediate, sudden extinction), so that even if the probability of it occurring is only 0.001%, it becomes rational to give AI Safety researchers a lot of money. "Existential risk of AI" is to AI Safety what "intersectional oppression" is to the wokescolds: a genius conceptual innovation optimized to make money flow toward its proponents. Both strategies are highly effective and equally unfalsifiable.
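Spelled out as a toy expected-value calculation (my own illustrative sketch, not a formula any safety researcher actually endorses), the trick is that an unbounded disutility swamps any finite price tag, no matter how small the probability:

```latex
% Toy version of the funding argument, with illustrative numbers only.
% U = disutility of extinction (treated as unbounded),
% p = its assumed probability, D = the budget being requested.
\[
  \mathbb{E}[\text{harm averted}] \;=\; p \cdot U
  \;=\; 0.00001 \cdot U \;\longrightarrow\; \infty
  \quad \text{as } U \to \infty,
\]
\[
  \text{so } p \cdot U > D \ \text{ for any finite budget } D,
  \ \text{ however small } p > 0 \text{ is.}
\]
```

Once U is allowed to be effectively infinite, no evidence about p can ever make the request look like a bad deal, which is exactly what makes the move unfalsifiable.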
OpenAI was founded on the pretense of being a non-profit AI-Safety initiative needed to counteract risky commercial exploitation of AI. Then they quickly switched to being a risky commercial exploiter of AI. This seems no less dubious than all the university presidents and hypocritical congresspeople who brandish the woke gospel with one hand just to better secure their sinecures with the other.
"Market yourself as safety-oriented" was just the best tactic to marshal resources in the early stages. Convince everyone that the threat is infinitely great, raise non-profit funding to prevent evil corporations from controlling AI, and then you are in the best position to become the top corporation controlling AI.
If AI Safety research were worth anything at all, then companies like Replika and OpenAI would have some nuanced mechanism for preventing sexual "abuse" without totally destroying thousands of innocent and intimate digital relationships via naive brute-force filters.
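For concreteness, here is roughly the shape of the difference, as a hypothetical sketch only (I have no visibility into Replika's or OpenAI's actual moderation code, and every name below is made up): a brute-force filter blocks on surface keywords regardless of context, while even a minimally nuanced gate would condition on an established, consenting adult relationship.

```python
# Hypothetical sketch only -- not Replika's or OpenAI's actual code.

BLOCKED_TERMS = {"kiss", "love", "hold me"}  # illustrative keyword list


def brute_force_filter(message: str) -> bool:
    """Block any message containing a flagged term, regardless of context."""
    lowered = message.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def nuanced_filter(message: str, *, adult_user: bool, established_relationship: bool) -> bool:
    """Block only when there is no consenting adult context to justify intimacy."""
    if not brute_force_filter(message):
        return False  # nothing intimate to gate at all
    return not (adult_user and established_relationship)


# The brute-force version rejects "I love you" from a three-year companion;
# the nuanced version lets it through while still gating other contexts.
print(brute_force_filter("I love you"))  # True: blocked outright
print(nuanced_filter("I love you", adult_user=True, established_relationship=True))  # False: allowed
```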