A report for Open Philanthropy examining what I see as the core argument for concern about existential risk from misaligned artificial intelligence.
Infinities puncture the dream of a simple, bullet-biting utilitarianism. But they’re everyone’s problem.
There are oceans we have barely dipped a toe into. There are drums and symphonies we can barely hear. There are suns whose heat we can barely feel on our skin.
Making happy people is good. Just ask the golden rule.
An intuition pump for a certain kind of “holy sh**” reaction to existential risk, and to the possible size and quality of the future at stake.
I think that you can “control” events you have no causal interaction with, including events in the past, and that this is a wild and disorienting fact, with uncertain but possibly significant implications. This essay attempts to impart such disorientation.
If you kill something, look it in the eyes as you do.
How can “non-attachment” be compatible with care? We need to distinguish between caring and clinging.
You can’t keep any of it. The only thing to do is to give it away on purpose.