Do you want to live forever?
No? Okay, let me phrase it another way. Do you want to live tomorrow?
Most people answer yes to this second question, even if they said no to the first. (If you didn’t say yes to the second, that’s typically called suicidal ideation, and there are hotlines for that.)
This doesn’t quite make sense to me. If I came to you tomorrow and asked the same question, “Do you want to live tomorrow?”, you’d probably still say yes; likewise with the day after that, and the day after that, and the day after that. Under normal circumstances, you’ll probably keep saying yes to that question forever. So why don’t you want to live forever?
Maybe you think that the question “do you want to live forever” implies “do you want to be completely incapable of dying, and also, do you want to be the only immortal person around”. Not being able to die, ever, could be kind of sucky, especially if you continued to age. (There was a Greek myth about that: Tithonus, granted eternal life but not eternal youth.) Further, being the only person among those you care about who can’t die would also suck, since you’d witness the inevitable end of every meaningful relationship you had.
But these sorts of arbitrary constraints are the realm of fiction. First, if a scientist invented immortality, there would be no justifiable reason for it to be any less available to those you care about than it is to you. Second, it’s a heck of a lot easier to stop people from aging than it is to make a human completely impervious to anything that might be lethal. When I say “yes” to “do you want to live forever”, it’s induction on the positive integers, not a specific vision whose desire spans infinity.
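To spell the induction out explicitly (a sketch of my own; the predicate W is just a label I’m introducing, not anything formal):

\[
\begin{aligned}
W(n) &:= \text{“on day } n \text{ you want to be alive on day } n+1\text{”}\\
\text{Base case:}\quad & W(1) \text{ holds (you want to live tomorrow).}\\
\text{Inductive step:}\quad & W(n) \implies W(n+1) \text{ (each new day, you still want the next one).}\\
\text{Conclusion:}\quad & \forall n,\ W(n).
\end{aligned}
\]

No step in the argument requires wanting an actually infinite span all at once; “forever” just falls out of wanting each next day.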
Even after I’ve made sure we’re on the same page as to what exactly real immortality might look like, some people still aren’t convinced it would be a good idea. A decent number of the arguments are some variant of “death gives meaning to life”.
To this, I’ll borrow Eliezer Yudkowsky’s allegory: if everybody got hit on the head with a truncheon once a week, soon enough people would start coming up with all sorts of benefits associated with it, like, it makes your head stronger, or it makes you appreciate the days you’re not getting hit with a truncheon. But if I took a given person who was not being hit on the head with a truncheon every week, and asked them if they’d like to start, for all these amazing benefits, I think they’d say no. Wouldn’t you?
People make a virtue of necessity. They’d accept getting hit on the head with a truncheon once a week, just as they now accept the gradual process of becoming more and more unable to do things they enjoy, being in pain more often than not, and eventually ceasing to exist entirely. That doesn’t make it a good thing; it just demonstrates people’s capacity for cognitive dissonance.
These are the reasons I’ve made it my goal to cure mortality. The motivation is extremely similar to anyone’s motivation to cure any deadly disease. Senescence is a terminal illness, which I would like to cure.
It disrupts the natural order, but so does curing any other disease. Cholera was the natural order for thousands of years, but we’ve since decided it’s bad, and nowadays nobody is considering the idea of mixing sewage with drinking water to bring it back. There were tons of diseases that were part of the natural order right up until we eradicated them. We don’t seem to have any trouble, as a society, deciding that cancer is bad. But death itself—the very thing we’re trying to prevent by curing all these diseases—is somehow not okay to attack directly.
Here’s the bottom line. I know for a fact I’m not the only one with this goal. Some of the people at MIRI come to mind, as well as João Pedro de Magalhães. I’d personally love to contribute to any of these causes. If you know someone, or are someone, who’s working towards this goal, I’d love to join you.
I was someone like that. But I now think we’re out of time. Getting AI right is now the most important thing. Of course there are still people dying and thus lives to be saved, and there are many biologists working on anti-senescence, rejuvenation, and so forth. But AI is going to rule the world, and its imperatives are likely to be what determines our fate.
I agree with you that AI is extremely important. That’s why I’m carefully following AI research and working on ML myself (see my PDP updates for notes on this: http://www.jenyalestina.com/blog/2019/02/26/pdp-5/).
When I say I want to cure mortality, this is a terminal value. I want it done as soon as possible, no matter how. If it gets done by biologists in laboratories, fine; if it gets done by a Friendly AI that somebody programs in their basement, fine. I agree that AI is probably going to rule the world, but when it rules the world, one of the first things I would like it to do is cure mortality.
(I didn’t check this page for a while…)
Let’s consider for a moment what “curing mortality” actually entails. There are a lot of ways (though I guess only a finite number of ways, no matter how numerous) in which a person may die. I would say that all the syndromes which make up the ageing process are the core issue. If they could be cured, then you could have rejuvenation and indefinitely prolonged youth. But of course, even a physically youthful human being can end up dying in various ways, either because of conditions which do fall within the scope of medicine (e.g. disease, traumatic injury), or because of external circumstances (accident, suicide, murder, natural disaster, civilizational collapse and other ‘existential risks’).
Also, the longer something lasts, the more opportunity there is for fatal damage to occur. I am frankly not at all convinced that human beings, or something that grew out of humanness, would want to live for, e.g., a million years. That is longer than all recorded history by orders of magnitude, and one may simply run out of things to care about or do. Anyway, my point is that while you can develop medical technology to repair and prevent, the human body still has multiple vulnerabilities, and over really long periods of time the likelihood of a fatal combination of circumstances (meaning that you die of something which in principle could be fixed, but which in practice you didn’t get to fix) keeps increasing.
This argument spirals out into uncertainties about increasingly radical alternatives to the human form, the long-term physical behavior of the universe, whether you can get the cumulative probability of death as time approaches infinity to converge on something less than probability 1, and so on. I don’t know if you’ve thought this far ahead, but these are the further issues that open up when you talk of literally “curing mortality”, rather than, say, aiming to make comprehensive rejuvenation possible.
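To make that last question concrete, here is a toy model (the per-year hazards p_k are assumed parameters of my own, not anything from the discussion above). Suppose your probability of dying during year k, given that you survived to it, is p_k. The probability of surviving the first n years is then

\[
S(n) = \prod_{k=1}^{n} (1 - p_k), \qquad \text{and} \qquad \lim_{n\to\infty} S(n) > 0 \iff \sum_{k=1}^{\infty} p_k < \infty \quad (\text{given each } p_k < 1).
\]

So with any fixed hazard floor p_k ≥ ε > 0, you get S(n) ≤ (1 − ε)^n → 0, and eventual death has probability 1 no matter how small ε is. Literally “curing mortality” would require driving the per-year hazard toward zero fast enough to be summable, e.g. p_k ∝ 1/k².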
At the same time, I do think that the human race is kind of mad to show so little interest in even radical life extension, although there are undoubtedly reasons for this: religion is an easier way out; people often hate their lives anyway; and it takes a certain combination of joie de vivre, audacity, and sense of human potential (or just one’s own potential) to imagine changing the biological human condition in such a fundamental way, a combination which may actually be rather rare.
All that said, there are any number of research groups and activist communities addressing some aspect of human longevity. “Longevity” should serve as a good keyword to find some of them, e.g. on Facebook. So I’d suggest looking for those groups, if you really want to be a part of this.
But as I implied, I think superhuman AI is historically imminent – possibly just a few years away, now that we have cloud computing and huge data centers; we probably have the necessary hardware, and it’s just a matter of the right algorithms being run. And if you’re specifying an AI’s value system with a view to creating a friendly singularity, it’s not enough to just say, e.g., “people should be able to live” – because what’s a person, and what alterations of a person fall outside what is desirable? An accidentally unfriendly AI might immortalize us all by freezing us and never thawing us, etc. I’m sure you’re very familiar with the idea that there are such pitfalls.
If we want a friendly singularity, I really see no alternative to identifying the “right value system”, or at least *an* AI value system that is “friendly to humans” (and their aspirations) in a way more precisely specified, and more deeply embedded in reality, than any human code of ethics has ever been. Back in the days before Less Wrong, the idea was that true human-friendly values might somehow be obtained through cognitive neuroscience – by identifying with scientific exactitude what kind of decision system the human brain employs, and what its terminal values (or whatever the appropriate concept is) genuinely are. That would be the true code of human ethics. I suppose this kind of thinking is still implicit e.g. in the Paul Christiano school of having ML infer true human values through close observation of human decisions…
Unlike the quest for longevity, it is a little less clear to me who is doing the crucial work in the area of friendly singularity. Obviously MIRI, and a few other people and organizations which are well known as being interested in this most advanced form of AI safety. But there may be essential theoretical ingredients which haven’t yet been integrated into their concept of what has to be done, and which are being developed elsewhere.
Anyway, this is just an off-the-cuff sketch of how I think about these things. I’m trying to be helpful but also to pull no punches. Good luck in finding your best niche from which to contribute.