Last week I had the chance to talk with Robin Hanson about AI, Age of Em, and telecommuting, among other topics. Here's my favorite bit:
Robin Hanson 53:57
So far, people have just made this sharp distinction between humans and machines and animals. And when they make that sharp distinction, they're willing to just be much more harsh in their treatment of non-humans than humans, right? And so simply focus on animals: the more you start to see animals as continuous with humans, say chimpanzees, then the less willing you are to tolerate harsh treatment of chimpanzees. And the more that you're going to insist that people be kind in various ways to the chimpanzees. Presumably, we're eventually going to do that for machines too. And so the question is, how will we draw the lines?
Either we just have some graded treatment where you can slowly treat them more harshly as they get different from humans, or we draw some sharp lines where past this line you can ignore it because that's nothing. And that's just going to be hard. Especially with AI because again, there's this much larger space of possible AI than there is for animals. Even the space of animals compared to humans is relatively limited compared to the space of possible AI. The nature of morality and the sacred is such that you tend to want to have relatively simple clean rules that everybody can observe and enforce. And the question is, well what can those be?
So there's two kinds of sets of rules, one set of rules is how not to be mean and cruel. And maybe another set of rules is how much to support and help. So even in the past, if you don't have any sort of welfare or obligation to save somebody else who's dying in the desert or something, you can still have norms about not mistreating them when you come across them, so it will be easier to just set up norms of avoiding mistreatment than it will be the norms of help. So a basic problem with ems and AI is it's just so easy to create them that if you have an obligation to help them once they're created, this is just unsupportable. It's just too big a demand. It's just so easy to create things that if there was an obligation to help them then they would just quickly suck up all available resources because they're so easy to create. So either you prevent their creation or you don't have such a strong obligation to help.
And,
Infovores 58:07
For a very long time in Western societies, the Bible has been a really foundational, the foundational book I would say, and that affects so much of how people think about things in ways both conscious and unconscious. Will the ems still have any connection to the Bible? Or will they perhaps develop an alternative bible of some sort? Will there be like a Good Samaritan story, except the good guy is the one who walks by the em on the road to Jericho without mistreating them?
The rest of that discussion is here. Elsewhere, I asked Robin why he would write a book elaborating the distant future in such seemingly historical detail. I think one possible answer is that if you have even a one-in-a-million chance of writing a book future ems might regard as their Bible, you should write that book.
An index of our discussion topics:
00:00-04:50, Long-term perspective on AI hype
04:15-13:00, Telework>AI?
13:30-21:50, Futurism isn’t about the Future
22:00-24:00, How “human” will the future be?
24:30-33:00, Will AI give good advice?
33:00-46:00, Inequality and the em welfare state
46:00-47:30, Brain freeze
47:30-52:00, The future is (still) female
52:00-1:03:00, Em morality and religion (interesting discussion on suicide also)
1:03:00-1:07:00, Does Robin use GPT?
1:07:00-1:11:00, Why Robin moved to Substack
Check out the whole thing here, and don’t forget to subscribe to Robin’s Overcoming Bias—a recent favorite is “Why is Everyone So Boring?”.