The perils of trying to optimize your morality
This story was originally published in The Highlight, Vox’s member-exclusive magazine.
I am a recovering optimizer.
Over the past several years, I’ve spent ages agonizing over every decision I made because I felt like I had to do the best possible thing. Not an okay thing, not a good thing — the morally best thing.
I stopped working on a children’s novel because I began to suspect it wouldn’t be useful to anyone. I berated myself for not meditating every day even though I know it makes me a kinder person. I spent a year crying over a breakup because I feared I’d just lost my optimal soulmate and was now doomed to a suboptimal life, one that wouldn’t be as meaningful as it could be, one that fell short of its potential.
I thought maybe it was just me, an anxious elder millennial with a perfectionist streak. But then I noticed the same style of thinking in others.
There was the friend who was always fretting over dinner about whether she could have a big-enough positive impact on the world through the career she’d chosen. Another friend would divide his day into 15-minute increments and write down what he did during each one so he wouldn’t waste any time. And a third friend — my best friend — called me crying because, even though she’d spent months assiduously caring for her partner’s dying mother, she worried that she hadn’t made the dying woman’s last days quite as happy as possible.
“My emotions got in the way,” she self-flagellated. “I wish I could just be a robot.”
I’ve particularly noticed this style of thinking in peers who identify with effective altruism (EA), the social movement that’s all about using data and reason to figure out how to “do good better” or “the most good you can do,” to quote the titles of books by two of EA’s leading thinkers. The movement urges people to donate to the charities that save the most lives per dollar. I listened as its adherents bemoaned how horrible they felt walking past people experiencing homelessness, feeling an urge to help out but forcing themselves not to because their dollar could do more good for impoverished people in low-income countries.
All of this felt like more than just the “optimization culture” so many of us have heard about before. It wasn’t the kind that strives to perfect the body, pushing you to embrace Soylent and supplements, intermittent fasting and ice baths, Fitbits and Apple Watches and Oura Rings. And it wasn’t the kind that focuses on fine-tuning the mind, pushing you to try smart drugs and dopamine fasting and happiness tracking.
This was another strand of optimization culture, one that’s less analyzed but more ambitious because instead of just targeting the body or the mind, it’s coming for the holy grail: your soul. It’s about moral optimization.
This mindset is as common in the ivory tower as it is in the street. Philosophers with a utilitarian bent tell us it’s not enough to do good — we have to do the most good possible. We have to mathematically quantify moral goodness so that we can then maximize it. And the drive to do that is showing up in more and more circles these days, from spiritual seekers using tech to “optimize” the meditations they hope will make them better people to AI researchers trying to program ethics into machines.
I wanted to understand where this idea came from so I could figure out why many of us seem increasingly fixated on it — and so I could honestly assess its merits. Can our moral lives be optimized? If they can, should they be? Or have we stretched optimization beyond its optimal limits?
How we came to believe in moral optimization
“We’re at the top of a long trend line that’s been going for 400 years,” C. Thi Nguyen, a philosopher at the University of Utah, told me. He explained that the story of optimization is really the story of data: how it was invented, and how it developed over the past few centuries.
As the historian Mary Poovey argues in her book A History of the Modern Fact, that story starts all the way back in the 16th century, when Europeans came up with the super-sexy and revolutionary intellectual project that was … double-entry bookkeeping. This new accounting system emphasized recording a merchant’s every transaction in a precise, objective, quantifiable way that could be verified by anyone, anywhere. In other words, it invented the idea of data.
That paved the way for huge intellectual developments in the 1600s and 1700s — a very exciting time for brainy Europeans. It was the Age of Reason! The Age of Enlightenment! Figures like Francis Bacon and Johannes Kepler looked at the innovation in bookkeeping and thought: This way of parceling the world into chunks of data that are quantifiable and verifiable is great. We should imitate it for this new thing we’re building called the scientific method.
Meanwhile, the 17th-century philosopher Blaise Pascal was coming up with a probabilistic approach to data, expressed in the now-famous Pascal’s Wager: If you believe in God and it later turns out God doesn’t exist, no biggie; but if there’s even a chance God does exist, your belief could make the difference between an eternity in heaven and an eternity in hell — so it’s worth your while to believe! (The philosopher of science Ian Hacking calls Pascal the world’s first statistician, and his wager “the first well-understood contribution to decision theory.”)
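Pascal’s reasoning is, at its core, an expected-value calculation. Here is a minimal sketch of that logic in Python; the payoffs and the probability are purely illustrative assumptions, not figures from Pascal:

```python
# A minimal sketch of the wager as an expected-value comparison.
# The payoff numbers and the probability are illustrative assumptions.

P_GOD_EXISTS = 0.001              # even a tiny probability will do
INFINITE_REWARD = float("inf")    # eternity in heaven
FINITE_COST = -1.0                # the worldly cost of believing

def expected_value(payoff_if_god, payoff_if_not, p=P_GOD_EXISTS):
    """Expected payoff of a choice, given the chance that God exists."""
    return p * payoff_if_god + (1 - p) * payoff_if_not

believe = expected_value(INFINITE_REWARD, FINITE_COST)  # inf
abstain = expected_value(float("-inf"), 0.0)            # -inf

print(believe > abstain)  # True: belief comes out "optimal"
```

Because the reward is infinite, any nonzero probability tips the calculation toward belief; that is the whole trick of the wager.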
Just as importantly, Isaac Newton and Gottfried Wilhelm Leibniz were creating calculus, which gave humanity a new ability to figure out the maximum value you can achieve within given constraints — in other words, to optimize.
From the beginning, people saw optimization as a godly power.
In the 18th century, the mathematician Samuel König studied the complex honeycomb structure of a beehive. He wondered: Had bees figured out how to create the maximum number of cells with the minimum amount of wax? He calculated that they had. Those fuzzy, buzzy optimizers! The French Academy of Sciences was so impressed by this optimal architecture that it declared it proof of divine guidance or intelligent design.
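König’s result is easy to gut-check with modern tools. Among the regular polygons that can tile a plane (triangles, squares, and hexagons), the hexagon encloses a given area with the least perimeter, and so the least wax. A quick sketch in Python; this is my own illustration, not König’s actual calculation:

```python
import math

def perimeter_for_unit_area(n_sides):
    """Perimeter of a regular n-gon with area 1.
    From A = n * s**2 / (4 * tan(pi/n)), solve for side s, then P = n * s."""
    return 2 * math.sqrt(n_sides * math.tan(math.pi / n_sides))

# The only regular polygons that tile a plane: triangle, square, hexagon.
for name, n in [("triangle", 3), ("square", 4), ("hexagon", 6)]:
    print(f"{name:8s} wall length per cell: {perimeter_for_unit_area(n):.3f}")

# hexagon wins: ~3.722 vs 4.000 (square) and ~4.559 (triangle),
# so hexagonal cells enclose the same area with the least wax.
```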
Soon enough, people were trying to mathematize pretty much everything, from medicine to theology to moral philosophy. It was a way to give your claims the sheen of objective truth.
Take Francis Hutcheson, the Irish philosopher who coined the classic slogan of utilitarianism — that actions should promote “the greatest happiness for the greatest number.” In 1725, he wrote a book attempting to reduce morality to mathematical formulas, such as:
The moral Importance of any Agent, or the Quantity of publick Good produced by him, is in a compound Ratio of his Benevolence and Abilitys: or (by substituting the Letters for the Words, as M = Moment of Good, and μ = Moment of Evil) M = B × A.
The utilitarian philosopher Jeremy Bentham, who followed in Hutcheson’s footsteps, also sought to create a “felicific calculus”: a way of determining the moral status of actions using math. He believed that actions are moral to the extent that they maximize happiness or pleasure; in fact, it was Bentham who coined the word “maximize.” And he argued that both ethics and economics should be about maximizing utility (that is, happiness or satisfaction): Just calculate how much utility each policy or action would produce, and choose the one that produces the most. That argument has had an enduring impact on moral philosophy and economics.
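In modern terms, Bentham’s procedure is an argmax: assign each option a utility score and pick the option with the highest score. A toy sketch, with options and numbers invented purely for illustration:

```python
# A toy rendering of Bentham's procedure: score each option by the
# utility (happiness) it would produce and choose the maximum.
# The options and scores below are invented for illustration.

utilities = {
    "build a park": 120.0,
    "build a parking lot": 35.0,
    "do nothing": 0.0,
}

best_action = max(utilities, key=utilities.get)
print(best_action)  # "build a park", the option with the most utility
```

The hard part, of course, is the step the sketch takes for granted: where those numbers come from.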