Should Effective Altruists Join the GOP?
ColdButtonIssues and I discuss how to do the most good in the world and what the Effective Altruism movement gets wrong (and right!) about markets, culture, and religion
ColdButtonIssues is an up-and-coming substacker who writes about Effective Altruism. His work has received favorable attention from Scott Alexander (#18) and in EA forums, and I’m excited to welcome him to the newsletter.
For those who aren’t familiar, CB, could you give a brief outline of what Effective Altruism is about and why you specifically have taken an interest in it?
At its heart, I would say effective altruism is applied utilitarianism or some form of consequentialism: what should you do if you want to do the greatest good for the greatest number? Sometimes people like to describe it in a toned-down form by saying it’s about doing good most effectively with the resources you’re willing to sacrifice. Under that second definition, someone who donates 1% of their income or time but tries to have the largest positive impact would still count.
I think the role of utilitarians in EA is important to emphasize, because there are lots of people who think they are doing the most good but pursue it in very different ways—political ideologues or missionaries, for example.
I was interested in utilitarianism from a pretty young age, although I wouldn’t call myself one, so when I saw the movement taking shape online I got involved through an EA-related cause when I was still in school.
In “Go Republican, Young EA!”, you write,
“Young effective altruists in the United States interested in using public policy to make the world better should almost all be Republicans. They should not be Democrats, they should not be Greens, they should be Republicans.”
What makes this decision so clear from an effective altruism perspective and what kind of pushback have you gotten in response?
The Republican Party, and the political right in general, is powerful and will almost certainly stay so. Right now, it’s struggling to recruit young Americans, especially those with elite credentials, to take positions. This means that if you have elite credentials you’ll stand out, and even if you don’t, you can still be pretty ambitious. A lot of Liberty University graduates got pretty cool positions in the Trump Administration. Because there is a stigma against being on the right in the circles effective altruists travel in, people who are selfish might opt out of those roles for personal reasons. But if you’re willing to make personal sacrifices for the greater good, you can just accept the stigma and the risk of social ostracism.
I think a lot of people agree with the post and have told me so. Pushback falls into two main buckets:
“The GOP is evil—how could you consider that?”
“Technically correct, but working for a group you dislike is so mentally taxing you won’t be successful.”
For the first objection, I don’t agree. I think there are a lot of things to dislike about both major parties, but I don’t think the GOP is beyond the moral pale. The other argument I made, and still think is correct, is that working within an institution you think is evil isn’t always wrong. You need to compare how much bad the party will do if you’re a part of it to how much bad it will do without you. If you can slightly improve either major party’s approach to healthcare policy or occupational licensing or R&D or any of a bunch of important but not that visible policy areas, I think you can feel pretty good about your life.
For the second objection, I mostly agree. If you really despise a group, it’s hard to be a contributing member of it. And I’m not utilitarian enough to endorse deception; I like people being honest. If you are moderate or conservative or libertarian, I think trying to work in politics makes a lot of sense. I suppose people could try to change their minds about politics—I think utilitarianism doesn’t imply supporting the left in the United States—but people tend to be really committed to their political views.
I think you’re right about this, and in a way the case just gets stronger the worse you think the GOP is—almost by definition doing the most good means effectively targeting whatever is most pressing, persistent, and bad! Republicans are going to win a lot of elections essentially no matter what you do, so improving the party to be even slightly less bad [again, from that point of view] seems like a big improvement.
I do wonder though whether the perspective of an individual EA interested in policy is the right level of analysis. At present, the GOP fluctuates so wildly between policy priorities that even moderate or right-leaning EAs might be turned off by the thought of attempting serious policy work or activism in that environment. Granting that there’s still a lot of upside to an individual pushing currently fringe ideas in a high variance environment (more chance of a large unexpected swing toward your view), wouldn’t the biggest gains be likely to come from a more coordinated colonization of the GOP establishment by EA leadership? (AEI seems like it might be open to that kind of thing these days.)
I think wide-scale entry of EAs into the GOP probably won’t happen. Mostly because liberals are hyper-committed to their current politics and most EAs are liberal. And then some EAs have staked out extreme positions on things like farming or immigration relative to the Republican Party or even the United States, so some doors might be closed to them. That said, I think many EAs should join the GOP!
And I do think the American right is really open to defectors right now, even ones who have taken anti-conservative positions before. James Lindsay has been warmly received by religious conservatives despite his vocal atheism. (I think this shows a weakness of the right: it’s forced to accept pretty much everybody.) Republicans are now big fans of electric car guy Elon Musk.
An alternate take-away from your piece might be, “Ok maybe I don’t literally want to flip and become a Republican, but given how crowded the field is in Democratic policy making, I’ll likely have greater impact by abandoning the political route altogether.” Would you endorse this conclusion as well?
No, I don’t think so. The power of the US federal government is enormous. State governments control a lot of money and mostly run K-12 education, the criminal justice system, and a lot of other important things. I think if two equally talented EAs both go into politics, I expect the conservative or Republican one to do a lot more good. But I still think it’s possible, although less likely, for Democratic policy and politics to be a good choice especially if you’re working in important but less polarized areas. In theory, YIMBYism can appeal to people on the left and right. In practice, YIMBYs have focused on appealing to the left which is smart because the worst housing situations are in Democratic-run cities.
If you have a reasonable chance of getting to an influential role in policy, most people won’t make more impact outside of that role so I would recommend trying for it.
What I would say is that if you’re the ten millionth person on the bandwagon you probably won’t make much of a difference. So being a cheerleader for now really popular views like same-sex marriage or Medicare will probably be low-impact. Activism for fringe ideas is probably more worthwhile.
Definitely, policy is super important. But getting back to the individual’s decision, aren’t there a lot of opportunities to make an impact by working at a non-policy-oriented non-profit or (gasp) a private firm? If you’re fresh out of ideas for directly making the world better through your work, maximizing your earnings and donating still seems like it could do more good than being the large-Xth Democrat doing public policy.
I think in this case you’re implicitly (or if you’re hardcore EA explicitly) comparing what you would do in the position you take relative to what your replacement would do, in both the policy world and the non-profit non-policy world. So if you think you are unusually good at management working at a nonprofit or a firm could be the best thing to do, even if it’s not an explicitly EA organization. But I do think in policy, there are lots of issues that come up unexpectedly and it can be valuable to be the person with influence or the person who can directly make the decision. Thinking back to the Trump administration, I think Stephen Miller’s presence was very influential. If he had decided to work elsewhere I think history would be different. So I think replacing a generic Democratic staffer with a generic left-leaning EA does have real value. Imagine there’s another pandemic—maybe the staffer would have more sympathy for human challenge trials? Or if they are at the Department of State or USAID when foreign aid is being reviewed, that seems potentially important.
But climbing the ladder on the left seems harder, you’re really replaceable. So I agree working on standard left-wing policy priorities is probably inferior to just making money and giving it away (even if you’re confident the left is correct).
You’ve argued that the “effective altruism penumbra” is expanding, meaning that EAs are almost expected to share certain assumptions and cultural beliefs that may not have any necessary connection to the stated aim of doing the most good possible. What beliefs most trouble you within this penumbra and how might they stand in the way of achieving the movement’s core goals?
I think effective altruists are very WEIRD. They’re irreligious and to the left of the median American, but well to the right of the people who would identify as “leftists” rather than as liberal or progressive. They also really like to promote certain viewpoints even when they might not affect real world actions.
I think “longtermism” is pretty reasonable. But I think basically nobody who isn’t already an EA will be motivated to change their behavior on the basis that they might affect ten quadrillion future people. If you, like many effective altruists, think artificial intelligence could wipe out humans in the next few decades or biowarfare could bring down civilization, you can just say that. I think writing a thriller that would be sold in airport bookstores would motivate more people to worry about biorisks than expected value calculations ever will.
I think effective altruists try really hard to think carefully and confront their bias, and I think they’re more successful than most people are. I’m a fan of the movement and the people I know within it. But I think you’re still going to be rowing uphill if you’re working on a cause that’s coded as right-wing, even if you have good evidence behind you.
A more general criticism of utilitarians is that there’s a tendency toward smugness among them, a smugness I think I used to share. People will voice very normal intuitions—killing is worse than letting die, friends and family matter way more than strangers, humans matter more than animals. Then utilitarians whip out canned thought experiments, do some calculations, and urge people to follow the “math” and do the “objectively” correct thing.
But when utilitarian calculations would cut against the moral norms or intuitions of utilitarians themselves, all kinds of objections come out. If you think human life is good and saving human lives is good and you’re impartial between people, then encouraging people to have big families should be a very good thing to do. But some utilitarians will come up with all kinds of weird moral theories rather than say having kids is a good thing to do. Or many arguments against religious belief and practice feel very strained. So I think many utilitarians, including those in effective altruism, think they bite moral bullets but they don’t.
That last bit raises a great question—what untapped possibilities exist for EA with respect to religion?
Even looking at it from a purely secular vantage point, religious movements have driven a lot of the most transformative and lasting changes in the world and a fair amount of evidence links religious observance to some pretty good outcomes. Some specific religious institutions have impressive organizational capacity on a worldwide scale apart from just the general faith-based authority and goodwill within many communities. Is it crazy to envision EA teaming up with, say, the Latter-day Saints to do something really great?
I think religious institutions would be very unlikely, and perhaps unwise, to team up with EA formally. The culture and core values are too divergent. Is man’s chief end to glorify God and enjoy him forever, or is it to maximize something like quality-adjusted life-years or help colonize the stars?
There are some religious categories, like “progressive Christians,” that tend to be less interested in evangelism and more interested in activism, and I’ve heard people suggest them as useful allies. But in my experience that subset would probably be uninterested in the quantitative approach EAs take.
But I think there are synergies and there already are religious EA affinity groups. I could imagine more Christians, for instance, looking to GiveWell when they make their charitable contributions. Or you could maybe make an EA case for religious evangelism—as a way to boost birth rates, or decrease loneliness, or increase happiness, or because you think it will maximize the convert’s chance at infinite bliss.
I think people should bite the bullet on Pascal’s Wager and go get baptized!
Now I’m going to propose some criticisms of effective altruism, and you can let me know what you think:
“I fear that the point of a young capitalist talking up Effective Altruism is to try to signal to socialists that you’re not as bad as a real capitalist”
How much of EA is just “libertarianism with extra steps”, and is something critical being lost by not making a more explicit case for markets?
I don’t spend any time worrying about how socialists think about me. I think effective altruists are pretty pro-market already.
A common effective altruist heuristic is that causes are more likely to be important when they’re neglected relative to their importance. I think markets are great and pro-market advocacy is important. But I don’t think it’s neglected in general relative to the other things EAs care about. There are libertarian and conservative intellectuals, much of the Republican Party, some of the Democratic Party, and so on who defend and promote markets. So I don’t think effective altruists should switch en masse to fighting for markets or capitalism or whatever. Some EAs do work in niche areas like YIMBYism, which is pro-market relative to the status quo.
Lant Pritchett has argued that despite the trendiness of randomized anti-poverty interventions in the third world, they ultimately don’t accomplish very much in terms of long-term development. Is EA overly invested in GiveWell-type charity work, or is the extent of their support pretty well justified all things considered?
Full disclosure: My association with EA has been through US-based public policy work. I’m not a development economist.
I think the track record of GiveWell-type charity work is a lot better than that of people promoting anti-poverty political reforms or political revolutions, which often make things much, much worse.
In theory, I’m sure correct political reform or advocacy would do more to promote long-term development. But a lot of that stuff can backfire, pretty badly.
For a movement that aspires to establish long-term priorities on behalf of humans who will live many millions of years into the future, EA/rationalist culture can sometimes seem completely disconnected from how most people living today see the world (take for example these wedding vows). How important is this observation for evaluating EA’s prospects for success and how much can we ever really expect to know about what future generations will value?
In the short term, it would probably be better to just be normal if you want to influence others. But sometimes community norms that seem bizarre in the short term are really influential in the long term. In the 2000s there were all these feminist blogs with very strange norms around language, but now such norms are pervasive almost everywhere I go. If you think your cultural values are way better than everyone else’s, sticking to them makes sense.
Personally, I’m pretty sympathetic to “normie values.” Most people should just go read Miss Manners instead of trying to express their own idiosyncrasies.
More philosophically, I’m really sympathetic to moral intuition, my own and others’. My reservations about EAs, and also, for instance, libertarians, often come from their getting obsessed with one single moral intuition and throwing the rest away. If most people on earth think X is good and most past generations thought X is good, we should act like X is good unless we have overwhelming reason not to. For this reason, I think piety and filial piety are probably pretty good!
I tend to agree! Thanks CB, this has been a great conversation. To close, what advice would you give to readers who have taken these thoughts to heart and are seriously considering affiliating with the EA movement? Where should they start?
I just spent a lot of time criticizing the EA movement, but I do think really highly of it compared to almost any other movement.
Institutionally, effective altruism is now big enough to have resources to actively recruit new members and so it has official introductory materials. And if you’re a student or near a college town or big city, there are EA groups you can attend online or in person. The people I’ve met at these meet-ups have been nice. Personally, I just like to browse the EA forum which is a fun mix of esoteric philosophy and people asking for personal advice.
But if you’re just looking to donate money somewhere where it’s very likely to do real good, you can just look at who GiveWell recommends.