The “Effective Altruism” movement (known as “EA”), which grew in popularity from the early 2000s until the blowup of cryptocurrency scammer and EA promoter Sam Bankman-Fried late last year, seeks “to use evidence and reason to figure out how to benefit others as much as possible.”
Some have proclaimed this evidence-based altruism movement dead (“I see EA fall like lightning,” one friend joked to me late last year). But I think the movement is very much alive. If anything, it will gain steam with the rise of generative A.I. and the materialist philosophies of our day. It may have to undergo a serious re-branding, sure, but it’s not going anywhere. All the best heresies are relatively anti-fragile: they only grow stronger under attack. EA will be like that.
Anyone who has sought philanthropic money in the past ten years (and I am one of them) knows just how ridiculous most of those fundraising processes have become. And we have EA to thank for much of this. It has infiltrated even the mindsets of people who think they have nothing to do with EA. Consider the incessant requests to “measure everything,” even highly qualitative things like religious conversions, and to find the right “metrics” for the “programs” to “have impact.” (That word…’impact’!)
The metaphysical fraudulence of EA still hasn’t been properly exposed. The key to understanding EA, in my view, is that it rests on a warped notion of “love,” or at least on the appropriation of the word ‘love’ by so many people in or adjacent to the movement. Even if they never say the word, the implicit message is that what they are doing is more loving than the work of those who have the nerve to actually claim to be acting out of love.
Those people, the thinking goes, are likely mistaken: if they just followed the data, they would see a better way to serve humanity.
Remember when Marc Andreessen claimed that A.I. tutors would be ‘the machine version of infinite love’? (“Every child will have an A.I. tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful,” he said.)
Andreessen is putting language to what many in the EA community really believe (I have talked to many of them in person, so I can verify this): that the zenith of love is complete “disinterestedness,” which A.I. can of course achieve better than humans can, because it has no human interests, no human heart, no human desire to be recognized or loved in return. This “disinterestedness” is the unspoken attraction of ‘computer love’.
Though he is relatively unknown today, the 17th-century bishop François Fénelon cemented the idea of ‘disinterested’ love in the popular imagination of his time. His ideas live on very strongly, and Fénelon is more important than ever.
He is most famous for a work published in 1699 called Telemachus, ostensibly addressed to the grandson of the Sun King, Louis XIV. In it, Fénelon urges the young man to rule less selfishly than his grandfather.
His core ideas centered on the various modes of self-transcendence. He contrasted amour-propre (self-love) with what he called amour pur, “pure love,” a kind of love completely free of any concern for the self.
It’s easy to see why those striving for perfection could fall into the conceit of thinking they could achieve this kind of completely ‘disinterested’ love, shedding their human interests in things like, say, wanting to be loved in return.
A.I. has allowed Fénelon’s dream of this disinterested ‘love’ to finally become a reality, more than three centuries later.
But what’s wrong with this idea? Many things. Following the philosopher Max Scheler (and his disciple, Dietrich von Hildebrand), I am a strong critic of almost anything that calls itself ‘altruism’, because what is almost always meant by altruism is the kind that Fénelon imagined: the conceit of a love divorced from everything that makes it human, and from the human subject himself. One of my mentors, John Crosby, puts the objection like this:
“Thus altruism, understood as a certain extreme and unbalanced other-centeredness, while it poses as supreme love, in fact undermines love.”
One can imagine a man who falls deeply in love with a woman and says: “I love you, I am crazy about you, and I want only your good—and I don’t care at all whether you love me in return. If you loved me back, this wouldn’t be a source of happiness for me in the least! In fact, it might taint my love for you by making me selfish. All I care about is your good.” How bizarre that would be, and how awkward.
This altruism takes the beautiful, common Italian expression Ti voglio bene (“I want your good”) to an extreme, displacing the self completely and causing people to lose themselves in an unhealthy “other-centeredness” that ends up depersonalizing them. Their sense of self-respect and self-love is completely eclipsed.