About Me

Hi, I am Alexey Turchin: x-risks researcher, creator of roadmaps, collector of Russian outsider art, founder of the Russian Transhumanist Party, and founder of the company "Digital Immortality Now".

Here are links to my main resources:

My Facebook:

Links to all my roadmaps:

My art collection:

My bio:

More about me:

My Startup:

My book on x-risks prevention:

Russian Transhumanist Party:

My slides and maps and some texts:

X-risks library:

My Twitter:

Me on YouTube:

My articles on LessWrong:

Scientific publications:

Turchin A. The possible reasons for underestimation of the risks of human extinction. // Problems of Risk Management and Safety. Vol. 31, Moscow, 2007, pp. 266–305.
Turchin A. Natural disasters and the anthropic principle. // Problems of Risk Management and Safety. Vol. 31, Moscow, 2007, pp. 306–332.
Turchin A. Problems of sustainable development and prospects of global catastrophes. // Social Sciences and Modernity. Moscow, 2010, No. 1, pp. 156–163.

Contact me at alexeiturchin at gmail.com

The map of organizations, sites and people involved in x-risks prevention

Three previous attempts to make a map of the x-risks prevention field are known:

1. The first is the list made by the Global Catastrophic Risks Institute in 2012–2013; many of its links are already broken:

2. The second was made by S. Armstrong in 2014.

3. The most beautiful and useful map was created by Andrew Critch. But its ecosystem ignores organizations which have a different view of the nature of global risks (that is, organizations which share the value of x-risks prevention but hold another worldview).

In my map I have tried to include all currently active organizations which share the value of global risks prevention.

The map also treats some active independent people as organizations, if they run an important blog or field of research; but not everyone is mentioned in the map. If you think that you (or someone else) should be in it, please write to me at alexei.turchin@gmail.com

I used only open sources and public statements to learn about people and organizations, so I can’t provide information on the underlying network of relations.

I tried to give each organization a short description based on its public statements, together with my own opinion about its activity.

In general, it seems that all the small organizations focus on collaboration with the larger ones, namely MIRI and FHI, and tend to ignore each other; this is easily explained by social signaling theory. Another explanation is that the larger organizations have a greater ability to make contacts.

It also appears that there are several organizations with similar goal statements.

The most cooperation seems to exist in the field of AI safety, but most of the structure of this cooperation is not visible to an external viewer, in contrast to Wikipedia, where the contributions of all individuals are visible.

It seems that the community in general lacks three things: a unified internet forum for public discussion, an x-risks wiki, and an x-risks-related scientific journal.

Ideally, a forum would be used to brainstorm ideas, a scientific journal to publish the best of them, peer-review them and present them to the wider scientific community, and a wiki to collect the results.

Currently it seems more as if each organization is interested in creating its own research and hoping that someone will read it. Each small organization seems to want to be the only one presenting solutions to global problems and to gain the full attention of the UN and governments. This raises the problem of noise and rivalry, and also the problem of possibly incompatible solutions, especially in AI safety.

The pdf is here: http://immortality-roadmap.com/riskorg5.pdf

Fermi paradox of the human past

Based on known archaeological data, we are the first technological, symbol-using civilisation on Earth (though not the first tool-using species).
This leads to an analogue of Fermi’s paradox: why are we the first civilisation on Earth? Evolution invented flight independently several times, so we might expect intelligence to arise more than once as well.
We could imagine that many civilisations appeared on our planet and then became extinct, and by the mediocrity principle we should expect to find ourselves somewhere in the middle of their sequence. For example, if 10 civilisations had appeared, we would have only a 10 per cent chance of being the first one.

The fact that we are the first such civilisation has strong predictive power about our expected future: it lowers the probability that there will be any other civilisations on Earth, including non-human ones or even a restarting of human civilisation from scratch. This is because, if there were going to be many civilisations, we should not expect to find ourselves to be the first one. (This is a form of the Doomsday argument; the same logic is used in Bostrom's article “Adam and Eve”.)
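A toy Bayesian version of this update (my illustration, not from the original argument): assume a uniform prior over the total number N of Earth civilisations and an equal chance of being any one of them in the sequence. Observing that we are the first then has likelihood 1/N, so the posterior shifts toward small N.

```python
# Toy Doomsday-style update: observing "we are the first" favours small N.
# Assumptions (mine, for illustration): uniform prior over N = 1..10, and
# if N civilisations appear in sequence, P(we are the first) = 1/N.
Ns = list(range(1, 11))
prior = {n: 1.0 / len(Ns) for n in Ns}
likelihood = {n: 1.0 / n for n in Ns}          # P(first | N = n)
evidence = sum(prior[n] * likelihood[n] for n in Ns)
posterior = {n: prior[n] * likelihood[n] / evidence for n in Ns}

print(round(posterior[1], 2))   # ~0.34: one-civilisation worlds gain weight
print(round(posterior[10], 2))  # ~0.03: many-civilisation worlds lose weight
```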

If we are the only civilisation to exist in the history of the Earth, then we will probably become extinct not in a mild way, but in a way which will prevent any other civilisation from appearing. There is a higher probability of future (man-made) catastrophes which will not only end human civilisation, but also prevent the existence of any other civilisation on Earth.

Such a catastrophe would kill most multicellular life. A nuclear war or a pandemic is not that type of catastrophe; the catastrophe must be really huge, such as irreversible global warming, grey goo, or a black hole created in a collider.

Now I will list possible explanations of this Fermi paradox of the human past, together with the corresponding x-risk implications:

1. We are the first civilisation on Earth, because we will prevent the existence of any future civilisations.

If our existence prevents other civilisations from appearing in the future, how could we do it? We will either become extinct in a very catastrophic way, killing all Earthly life, or become a super-civilisation which will prevent other species from becoming sapient. So, if we are really the first, it means that "mild extinctions" are not typical for human-style civilisations. Thus pandemics, nuclear wars, devolution and everything reversible are ruled out as the main possible methods of human extinction.

If we become a super-civilisation, we will not be interested in preserving the biosphere, as it would be able to create new sapient species. Alternatively, we may care about the biosphere so strongly that we will hide very well from newly appearing sapient species, a kind of cosmic zoo. This would mean that past civilisations on Earth may have existed but decided to hide all traces of their existence from us, as this would help us develop independently. So the fact that we are the first raises the probability of a very large-scale catastrophe in the future, like UFAI or dangerous physical experiments, and reduces the chances of mild x-risks such as pandemics or nuclear war. Another explanation is that any first civilisation exhausts all the resources needed for a technological restart, such as oil and ores. But over several million years most such resources would be replenished or replaced by tectonic movement.

2. We are not the first civilisation.

2.1. We haven't found any traces of a previous technological civilisation, and based on what we know, there are very strong limitations on where one could hide. Every civilisation leaves genetic marks, because it moves animals from one continent to another, just as humans brought dingoes to Australia. It must also exhaust several important ores, create artefacts, and create new isotopes. We can be fairly sure that we are the first tech civilisation on Earth in the last 10 million years.

But can we be sure for the past 100 million years? Maybe such a civilisation existed a very long time ago, say 60 million years ago (and killed the dinosaurs). Carl Sagan argued that it could not have happened, because we would find traces, mostly in the form of exhausted oil reserves. The main counterargument is that cephalisation, that is, the evolutionary development of brains, was not advanced enough 60 million years ago to support general intelligence: dinosaur brains were very small (though bird brains are more mass-efficient than mammalian ones). All these arguments are presented in detail in the excellent article by Brian Trent, “Was there ever a dinosaurian civilization?”

The main x-risks here are that we may find dangerous artefacts from a previous civilisation, such as weapons, nanobots, viruses or AIs. And if previous civilisations went extinct, it increases the chance that extinction is typical for civilisations. It also means that there was some reason why the extinction occurred, that this killing force may still be active, and that we could excavate it. If they existed recently, they were probably hominids, and if they were killed by a virus, it may also affect humans.

2.2. We killed them. The Maya civilisation created writing independently, but the Spaniards destroyed their civilisation. The same is true for the Neanderthals and Homo floresiensis.

2.3. Myths about gods may be traces of such a previous civilisation. Highly improbable.

2.4. They are still here, but they try not to intervene in human history. This is similar to the zoo solution of Fermi’s paradox.

2.5. They were a non-tech civilisation, and that is why we can’t find their remnants.

2.6. They may still be here, like dolphins and ants, but their intelligence is non-human and they don’t create technology.

2.7. Some groups of humans created advanced technology long before now, but prefer to hide it. Highly improbable, as most technology requires large-scale manufacturing and markets.

2.8. A previous humanoid civilisation was killed by a virus or prion, and our archaeological research could bring it back to life. One hypothesis of Neanderthal extinction is prion infection through cannibalism. In any case, several hominid species have gone extinct in the last several million years.

3. Civilisations are rare

Millions of species have existed on Earth, but only one was able to create technology, so it is a rare event. Consequence: cyclic civilisations on Earth are improbable, so the chance that we will be resurrected by another Earth civilisation is small.

The chances that we will be able to reconstruct civilisation after a large-scale catastrophe are also small (as such catastrophes seem atypical for civilisations, which instead quickly proceed to total annihilation or to singularity).

It also means that technological intelligence is a difficult step in the evolutionary process, so this could be one of the solutions to the main Fermi paradox.

The safety of the remains of previous civilisations (if any exist) depends on two things: their distance from us in time and the level of intelligence they reached. The greater the time distance, the safer they are, as most dangerous technology will have been destroyed by time or will no longer be dangerous to humans (like species-specific viruses).

The risks also depend on the level of intelligence they reached: the higher the intelligence, the greater the risk. If anything like their remnants is ever found, strong caution is recommended.

For example, the most dangerous scenario for us would be one similar to the beginning of Vernor Vinge’s book “A Fire Upon the Deep”: we could find the remnants of a very old but very sophisticated civilisation, including an unfriendly AI, its description, or hostile nanobots.

The most likely place for such artefacts to be preserved is the Moon, in cavities near the poles. It is the most stable and radiation-shielded place near Earth.

I think that, based on this (absence of) evidence, the estimated probability of a past tech civilisation should be less than 1 per cent. While this is enough to conclude that they most likely didn’t exist, it is not enough to completely ignore the risk of their artefacts, which in any case is less than 0.1 per cent.

Meta: the main idea for this post came to me in a dream, several years ago.

The map of natural global risks

There are many natural global risks. The greatest of the known ones are asteroid impacts and supervolcanoes.

Supervolcanoes seem to pose the highest risk, as we sit on an ocean of molten iron, oversaturated with dissolved gases, just 3000 km below the surface, with its energy slowly moving up via hot spots. Many past extinctions are also connected with large supervolcanic eruptions.

Impacts also pose a significant risk. But if we project the past rate of impact-driven extinctions into the future, we see that they occur only once in several million years. Thus the likelihood of an extinction-level asteroid impact in the next century is on the order of 1 in 100,000. That is negligibly small compared with the risks of AI, nanotech, biotech, etc.
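A back-of-the-envelope sketch of that conversion, assuming impacts arrive as a Poisson process with a 10-million-year recurrence interval (my illustrative stand-in for "once in several million years"):

```python
import math

# Convert a mean recurrence interval into a per-century probability,
# assuming extinction-level impacts arrive as a Poisson process.
recurrence_years = 10_000_000   # illustrative assumption
window_years = 100

rate = 1 / recurrence_years                      # expected events per year
p_century = 1 - math.exp(-rate * window_years)   # P(at least one impact)
print(p_century)                                 # ~1e-5, i.e. 1 in 100,000
```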

The main natural risk is a meta-risk: are we able to correctly estimate the rates of natural risks and project them into the future? And could we accidentally unleash a natural catastrophe which is long overdue?

There are several reasons for possible underestimation, which are listed in the right column of the map.

1. The anthropic shadow, that is, survivorship bias. This is a well-established idea of Bostrom's; the following four ideas are mostly my own conclusions from it.

2. There is also the fact that we should find ourselves at the end of the period of stability for any important aspect of our environment (atmosphere, solar stability, crust stability, vacuum stability). This holds if the Rare Earth hypothesis is true and our conditions are very rare in the universe.

3. From (2) it follows that our environment may be very fragile with respect to human interventions (think of global warming). Its fragility is like that of an overinflated balloon poked by a small needle.

4. Also, human intelligence was the best adaptation instrument during a period of intense climate change, as it evolved quickly in an ever-changing environment. So it should not be surprising that we find ourselves in a period of instability (think of the Toba eruption, the Clovis comet, the Younger Dryas, the ice ages) and in an unstable environment, as this helped general intelligence to evolve.

5. Periods of change are themselves marks of the end of stability periods for many processes, and are precursors of larger catastrophes. (For example, intermittent ice ages may precede a Snowball Earth, or smaller impacts with comet debris may precede an impact with a larger remnant of the main body.)

Each of these five points may raise the probability of natural risks by an order of magnitude, in my opinion; combined, they would result in several orders of magnitude, which seems too high and is probably "catastrophism bias".
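A toy version of that multiplication, with a purely illustrative baseline (the starting probability below is my assumption, not a measured rate):

```python
# Toy stacking of order-of-magnitude corrections (illustrative numbers only).
baseline = 1e-6          # assumed per-century probability of a natural catastrophe
corrections = [10] * 5   # one factor-of-10 per point above, treated as independent

adjusted = baseline
for c in corrections:
    adjusted *= c
print(adjusted)   # 0.1 per century - implausibly high, hence "catastrophism bias"
```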

(More about this is in my article “Why the anthropic principle stopped defending us”, which needs substantial revision.)

In conclusion, I think that when studying natural risks, a key hypothesis we should be checking is that we live in a non-typical period in a very fragile environment.

For example, some scientists think that 30,000 years ago a large Centaur comet broke into the inner Solar system and split into pieces (including comet Encke, the Taurid meteor showers, and the Tunguska body), and that we live in a period of bombardment with 100 times the average intensity. Others believe that methane hydrates are very fragile, and that small human-caused warming could trigger a dangerous positive feedback.

I tried to list all known natural risks (I am interested in new suggestions). I divided them into two classes: proven and speculative. Most speculative risks are probably false.

The most probable risks in the map are marked red. My own crazy ideas are marked green. Some ideas come from obscure Russian literature; for example, the idea that hydrocarbons could be created naturally inside the Earth (like abiogenic oil) and that large pockets of them could accumulate in the mantle. Some of them could be natural explosives, like toluene, and they could be the cause of kimberlite explosions. http://www.geokniga.org/books/6908 While kimberlite explosions are a well-known fact, with energies like those of kilometre-sized asteroid impacts, I have never read about the contemporary risk of such explosions.

The pdf of the map is here: http://immortality-roadmap.com/naturalrisks11.pdf

A human sim as an AI controller

I don't think we are arguing only about semantics. The idea of scanning a human is not my only idea, and it is not the best idea in AI safety; it is just an interesting, promising one.

In one Russian short story, a robot was asked to get rid of all circular objects in the room, and the robot cut off the owner's head. But if the robot had a simulation of a morally right human, it could run it thousands of times a second and check any of its actions against it.

The first difference between a sim and a human operator is that the sim can be run vastly more times and very cheaply, so we could check each robot action.

The second difference is that a human operator changes (he could be replaced by another operator, age, or suffer a stroke). The sim does not change at such speed (though it could still learn; for simple situations, fixed copies could be used).

The third difference is that the sim is specially selected and trained for many years, and later tested as a sim in many imaginary situations. We can't test a human operator with such rigour.

Four. The sim could control any infinitesimally small update of the AI's utility function, because it runs at the same speed. This prevents the AI from becoming malicious. I have to explain this: our typical discussions about malicious AI contain a logical loop (first shown by Ben Goertzel).

The loop: to start hiding its utility function from peers, the AI must become malicious. But it can't covertly become malicious without secretly changing its utility function. In other words, between the moment when the AI decides to be bad and the moment when it starts to act on this decision, there is a short time gap. This gap may be very short, like 0.01 sec, so no human operator has time to react. But a sim will have time (especially if the AI is halted after any update of its open utility function).

Another explanation of the gap: if all the values of an AI are written in open form (in some stack), and it creates the value to hide its values, then the value "hide values" is also written in open form (for a short time). So if the value stack is broadcast outside, there is no way for the AI to hide its transformation from the peer (except by killing the peer before it grasps all the implications).

Five. The sim not only checks values but also verifies correct understanding of human language and context.
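Here is a minimal sketch of such a control loop. Everything in it (the event format, the `sim_approves` oracle, the halt rule) is my illustrative assumption, standing in for the trained human sim described above:

```python
# Sketch of a sim-in-the-loop controller: every proposed action and every
# update of the AI's open utility function must be approved by a human sim
# that runs far faster than any flesh-and-blood operator could.

class HaltedForReview(Exception):
    """Raised when the sim rejects an event; the AI stays frozen."""

def sim_approves(event: dict) -> bool:
    # Stand-in for the uploaded, pre-tested human sim; it can be re-run
    # thousands of times per second on each event.
    return event.get("judged_benign", False)

def step(ai_log: list, event: dict) -> None:
    # Halt-on-update rule from the text: any change to the open utility
    # function is reviewed BEFORE it takes effect, closing the short gap
    # between "decides to be bad" and "acts on that decision".
    if not sim_approves(event):
        raise HaltedForReview(f"sim rejected {event['kind']}")
    ai_log.append(event)   # the open value stack is broadcast to the peer

log = []
step(log, {"kind": "action", "judged_benign": True})
try:
    step(log, {"kind": "utility_update", "judged_benign": False})
except HaltedForReview as e:
    print(e)   # sim rejected utility_update
```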


How did the universe appear from nothing?

So, the map about "how something appeared from nothing" is ready. While the question is simple, and by definition its answer can't be long, the map turned out to be unexpectedly long and complex.

I have spent two weeks looking into nothing. I had dreams that I don't exist, that I am just (part of) a Boltzmann brain in Pi, or a blot of color in the space of possible colors. I hope there is another mind in the universe (if it actually exists) that will enjoy the map.
The full text explanation is here http://lesswrong.com/r/discussion/lw/nw7/the_map_of_ideas_how_the_universe_appeared_from/

Slow shutdown of the simulation

I just got a new idea for a possible end of the world: a "slow simulation shutdown".

TL;DR: A shutdown of the simulation may be observed, and may be unpleasant; this is especially likely if there are infinitely many simulations. It would look like a very strange global catastrophe from the observer's point of view.

There is a possibility of something like quantum immortality for a many-simulations world. That is, if we have many identical simulations and some of them are shut down, nobody will feel it and it will have no observable consequences.

Let's name it "simulation immortality", as there is nothing quantum about it. I think it may be true, but it requires two conditions: many simulations, and a solution to the identity problem (is a copy of me in a remote part of the universe me?). I wrote about it here: http://lesswrong.com/lw/n7u/the_map_of_quantum_big_world_immortality/

Simulation immortality precisely neutralises the risk of the simulation being shut down. And if we accept the logic of quantum immortality, it works even more broadly, preventing any other x-risk, because in any case one person (an observer in his timeline) will survive.

In the case of a simulation shutdown, it works nicely if the shutdown is instantaneous and uniform. But if the servers are shut down one by one, we will see the stars disappear, and for some period of time we will find ourselves in a strange and unpleasant world. The shutdown may take only a millisecond in the base universe, but it could take a long time, maybe days, in the simulated one.

A slow shutdown is an especially unpleasant scenario for two reasons connected with "simulation immortality":

- Because of simulation immortality, its chances rise dramatically: if, for example, 1000 simulations are shutting down and one of them is shutting down slowly, I (my copy) will find myself only in the one with the slow shutdown.

- If I find myself in a slow shutdown, there are infinitely many identical simulations which are also experiencing a slow shutdown. This means that my slow shutdown will never end from my observer's point of view. Most probably, after all these adventures, I will find myself in a simulation whose shutdown was stopped or reversed, or which is stuck somewhere in the middle.
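A toy observer-counting model of the first point above (the observer-moment counts are my illustrative assumptions):

```python
import random

# 1000 simulations shut down; one of them shuts down slowly. An instant
# shutdown produces almost no observer-moments during the shutdown window,
# while a slow one produces many, so a random observer-moment sampled from
# that window almost surely belongs to the slow shutdown.
N_SIMS = 1000
moments = []
for sim_id in range(N_SIMS):
    slow = (sim_id == 0)
    n = 10**6 if slow else 1   # assumed observer-moment counts
    moments.extend([slow] * n)

sample = [random.choice(moments) for _ in range(10_000)]
print(sum(sample) / len(sample))   # ~0.999: nearly every sampled moment is "slow"
```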

See simulation map here: http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

Global warming prevention plan

TL;DR: The small probability of runaway global warming requires the preparation of urgent, unconventional prevention measures, namely sunlight dimming.


The most expected scenario, limited global warming of several degrees C in the 21st century, will not result in human extinction, as even the thawing after the Ice Age did not have such an impact.

The main question about global warming is the possibility of runaway warming and the conditions under which it could happen. Runaway warming means warming of 30 C or more, which would make the Earth uninhabitable. It is an unlikely event, but it could result in human extinction.

Global warming could also create context risks, which change the probability of other global risks.

I will not go into all the details of the nature of global warming and the established ideas about its prevention, as they have extensive coverage in Wikipedia (https://en.wikipedia.org/wiki/Global_warming and https://en.wikipedia.org/wiki/Climate_change_mitigation).

Instead, I will concentrate on heavy-tail risks and less conventional methods of global warming prevention.

The map provides a summary of all known methods of GW prevention, and also of ideas about the scale of GW and the consequences of each level of warming.

The map also shows how prevention plans depend on the current level of technology. In short, the map has three variables: the level of tech, the level of urgency of GW prevention, and the scale of the warming.

The following post consists of a wall of text and the map, which are complementary: the text provides in-depth details about some ideas, and the map gives a general overview of the prevention plans.

The map: http://immortality-roadmap.com/warming3.pdf


The main feature of climate theory is its intrinsic uncertainty. This uncertainty is not about climate change denial; we are almost sure that anthropogenic climate change is real. The uncertainty is about its exact scale and timing, and especially about low-probability tails with high consequences. In risk analysis we can't ignore these tails, as they bear the most risk. So I will focus mainly on the tails, which in turn requires a focus on more marginal, contested or unproven theories.

These uncertainties are especially large if we make projections for 50-100 years from now; they stem from the complexity of the climate, the unpredictability of future emissions, and the chaotic nature of the climate system.

Clathrate methane gun

An unconventional but possible global catastrophe, accepted by several researchers, is a greenhouse catastrophe known as the "runaway greenhouse effect". The idea is well covered in Wikipedia: https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis

Currently, large amounts of methane clathrate are present in the Arctic, and since this area is warming more quickly than other regions, the gases could be released into the atmosphere. https://en.wikipedia.org/wiki/Arctic_methane_emissions

Predictions about the speed and consequences of this process differ. Mainstream science sees the methane cycle as a dangerous but slow process which could eventually result in a 6 C rise in global temperature, which seems bad but survivable. It would also take thousands of years.

Something similar has happened once before, in the Late Paleocene: the Paleocene-Eocene Thermal Maximum, https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum (PETM), when the temperature jumped by about 6 C, probably because of methane. (Methane-driven global warming is just 1 of 10 hypotheses explaining the PETM.) But during the PETM, global methane clathrate deposits were around 10 times smaller than at present, because the ocean was warmer. This means that if the clathrate gun fires again, the consequences could be much more severe.

But some scientists think it may happen quickly and with stronger effects, resulting in runaway global warming because of several positive feedback loops. See, for example, the blog http://arctic-news.blogspot.ru/

There are several possible positive feedback loops which could make methane-driven warming stronger:

1) The Sun is now brighter than before because of stellar evolution. The increase in the Sun's luminosity will eventually result in runaway global warming, in a period 100 million to 1 billion years from now. The Sun will become thousands of times more luminous when it becomes a red giant. See more here: https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans

2) After a long period of cold climate (the ice ages), a large amount of methane clathrate accumulated in the Arctic.

3) Methane is a short-lived atmospheric gas (about seven years). So the same amount of methane results in much more intense warming if it is released quickly rather than scattered over centuries. The speed of methane release depends on the speed of global warming. Anthropogenic CO2 is rising very quickly and could be followed by a quick release of methane.

4) Water vapor is the strongest greenhouse gas, and more warming means more water vapor in the atmosphere.

5) Coal burning resulted in large global dimming (https://en.wikipedia.org/wiki/Global_dimming), and the current switch to cleaner technologies could stop its masking of global warming.

6) The ocean’s ability to dissolve CO2 falls as temperature rises.

7) The Arctic shows the biggest temperature increase due to global warming, with a projected rise of 5-10 C; as a result it will lose its ice shield, reducing the Earth's albedo and leading to higher temperatures. The same is true for permafrost and snow cover.

8) Warmer Siberian rivers bring their water into the Arctic ocean.

9) The Gulf Stream will bring warmer water from the Gulf of Mexico to the Arctic ocean.

10) The current period of a calm, spotless Sun will end, resulting in further warming.

Anthropic bias

One unconventional reason why global warming may be more dangerous than we used to think is anthropic bias.

1. We tend to think that we are safe because no runaway global warming event has ever happened in the past. But we could only observe a planet where this never happened. Milan Ćirković and Bostrom wrote about this, so the real rate of runaway warming could be much higher. See here: http://www.nickbostrom.com/papers/anthropicshadow.pdf

2. Also, we humans tend to find ourselves in a period when climate change is very strong, because of climate instability. This is because human intelligence, as a universal adaptation mechanism, was most effective in periods of instability. So climate instability helps to breed intelligent beings. (This is my idea and may need additional proof.)

3. If runaway global warming is long overdue, this would mean that our environment is sensitive even to small human actions (compare it with an over-pressured balloon and a small needle). In this case, the amount of CO2 we currently release could be such an action. So we could be underestimating the fragility of our environment because of anthropic bias. (This is my idea, and I wrote about it here: http://www.slideshare.net/avturchin/why-anthropic-principle-stopped-to-defend-us-observation-selection-and-fragility-of-our-environment)

The timeline of possible runaway global warming

We could call runaway global warming a Venusian scenario, because thanks to the greenhouse effect the surface temperature of Venus is over 400 C, despite the fact that, owing to its high albedo (0.75, caused by white clouds), it absorbs less solar energy than the Earth (albedo 0.3).
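A quick check of this claim with the textbook radiative-equilibrium formula T = (S(1-A)/4σ)^(1/4), using standard solar constants and albedos (this deliberately ignores the greenhouse effect, which is exactly the point):

```python
# Equilibrium (no-greenhouse) temperature from absorbed sunlight alone.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def eq_temp(solar_constant_w_m2: float, albedo: float) -> float:
    absorbed = solar_constant_w_m2 * (1 - albedo) / 4   # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

print(eq_temp(1361, 0.30))   # Earth: ~255 K
print(eq_temp(2601, 0.75))   # Venus: ~231 K - it absorbs less sunlight than
                             # Earth, yet its surface exceeds 400 C (greenhouse)
```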

A greenhouse catastrophe could consist of three stages:

1. Warming of 1-2 degrees due to anthropogenic CO2 in the atmosphere, and the passage of a «trigger point». We don't know where the tipping point is; we may have passed it already, or conversely we may be underestimating natural self-regulating mechanisms.

2. Warming of 10-20 degrees because of methane from gas hydrates and the Siberian bogs, as well as the release of CO2 currently dissolved in the oceans. The speed of this self-amplifying process is limited by the thermal inertia of the ocean, so it would probably take about 10-100 years. The process can be arrested only by sharp hi-tech interventions, like an artificial nuclear winter and/or the eruption of multiple volcanoes. But the more warming occurs, the less able civilization becomes to stop it, as its technologies will be damaged; conversely, the later global warming happens, the higher the tech level available to stop it.

3. Moist greenhouse. Steam is a major contributor to the greenhouse effect, which results in an even stronger and quicker positive feedback loop. A moist greenhouse would start if the average temperature of the Earth reached 47 C (currently 15 C), and it would result in runaway evaporation of the oceans and surface temperatures of 900 C (https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans). All the water on the planet would boil, resulting in a dense water-vapor atmosphere. See also: https://en.wikipedia.org/wiki/Runaway_greenhouse_effect


If we survive until a positive Singularity, global warming will not be an issue. But if strong AI and other super-technologies do not arrive before the end of the 21st century, we will need to invest a lot in prevention, as civilization could collapse before the creation of strong AI, which would mean we will never be able to use its benefits.

I have a map, which summarizes the known ideas for global warming prevention and adds some new ones for urgent risk management. http://immortality-roadmap.com/warming2.pdf

The map has two main axes: our level of tech progress and the size of the warming we want to prevent. A third key variable is the ability of humanity to unite and act proactively. In short, the plans are:

No plan – do nothing, and just adapt to warming

Plan A – cutting emissions and removing greenhouse gases from the atmosphere. Requires a lot of investment and cooperation; long-term action and remote results.

Plan B – geo-engineering aimed at blocking sunlight. Little investment needed, and unilateral action is possible. Quicker action and quicker results, but risky if it is ever switched off.

Plan C – emergency actions for dimming the Sun, like an artificial volcanic winter.

Plan D – moving to other planets.

All plans could be executed at the current tech level, and also at a higher tech level through the use of nanotech and so on.

I think that climate change demands that we go directly to Plan B. Plan A is cutting emissions, and it is not working, because it is very expensive and requires cooperation from all sides; even then it will not achieve immediate results, and the temperature will continue to rise for many other reasons.

Plan B is changing the opacity of the Earth's atmosphere. It could be a surprisingly low-cost exercise and could be operated locally. There are suggestions to release something as simple as sulfuric acid into the upper atmosphere to raise its reflectivity.

"According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft." https://www.technologyreview.com/s/511016/a-cheap-and-easy-plan-to-stop-global-warming/

There are also ideas to recapture CO2 using genetically modified organisms, by iron seeding in the oceans, and by dispersing the carbon-capturing mineral olivine.

The problem with this approach is that it can't be stopped. As Seth Baum wrote, a smaller catastrophe could disrupt such engineering, with the consequent immediate return of global warming with a vengeance. http://sethbaum.com/ac/2013_DoubleCatastrophe.html

There are other ways of preventing global warming. Plan C is creating an artificial nuclear winter through a volcanic explosion or by starting large-scale forest fires with nukes. This idea is even more controversial and untested than geo-engineering.

A regional nuclear war is capable of putting 5 million tons of black carbon into the upper atmosphere: “average global temperatures would drop by 2.25 degrees F (1.25 degrees C) for two to three years afterward, the models suggest.”

http://news.nationalgeographic.com/news/2011/02/110223-nuclear-war-winter-global-warming-environment-science-climate-change/ Nuclear explosions in deep forests may have the same effect as attacks on cities in terms of soot production.

The fight between Plan A and Plan B

So we are not even close to being doomed by global warming, but we may have to change the way we react to it.

While cutting emissions is important, it will probably not work within a 10-20 year period, so quicker-acting measures should be devised.

The main risk is abrupt runaway global warming. It is a low-probability event with the highest consequences. To fight it, we should prepare rapid-response measures.

Such preparation should be done in advance, which requires expensive scientific experiments. The main problems here are (as always) funding and regulators' approval. The impact of sulfur aerosols should be tested, and complicated mathematical models should be evaluated.

The counter-arguments are the following: “Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough of such measures to say that they are safe? Of course, if we believe that history will end anyways within decades or centuries because of singularity, long-term effects of such measures may not matter so much… Another big issue with changing insolation is that it doesn't solve ocean acidification. No state actor should be allowed to start geo-engineering until they at least take simple measures to reduce their emissions.” (comments from a LessWrong discussion about GW).

Currently it all looks like a political fight between Plan A (cutting emissions) and Plan B (geo-engineering), in which Plan A is winning. It has been suggested that Plan B should not be implemented, because an increase in warming would demonstrate a real need to implement Plan A (cutting emissions). Regulators didn't approve even the smallest experiments with sulfur shielding in Britain, and iron ocean seeding also has regulatory problems.

But the same logic works in the opposite direction: China and the coal companies will not cut emissions, because they want to press policymakers to implement Plan B. It looks like a prisoner's dilemma between the two plans.

The difference between the two plans is that Plan A would return everything to its natural state, while Plan B aims at creating instruments to regulate the planet's climate and weather.

In the current global political situation, cutting emissions is difficult to implement, because it requires collaboration between many rival companies and countries. If several of them defect (most likely China, Russia and India, which rely heavily on coal and other fossil fuels), it will not work, even if all of Europe were solar-powered.

A transition to a zero-emission economy could happen naturally within 20 years, once electric transportation and solar energy become widespread.

Plan C should be implemented if the situation suddenly changes for the worse, with the temperature jumping 3-5 C in one year. In that case the only option we have is to bomb the Pinatubo volcano to make it erupt again, or probably even several volcanoes. A volcanic winter would give us time to adopt other geo-engineering measures.

I would also advocate a mixture of both plans, because they work on different timescales. Cutting emissions and removing CO2 with the current level of technology would take decades to have an impact on the climate, but geo-engineering has a reaction time of around one year, so we could use it to cover the bumps in the road.

Especially important is the fact that if we completely stop emissions, we also stop the global dimming from coal burning, which could result in a 3 C global temperature jump. So stopping emissions may itself cause a temperature jump, and we need a protection system for this case.

In all cases we need to survive until stronger technologies develop. Using nanotech or genetic engineering, we could solve the warming problem with less effort. But we have to survive until then.

It seems to me that the idea of cutting emissions is overhyped and solar radiation management is underhyped in terms of public opinion and funding. By changing that imbalance we could achieve more common good.

An unpredictable climate needs a quicker regulation system

The management of climate risks depends on their predictability, and it seems that this predictability is not very high: the climate is a very complex and chaotic system.

It may react unexpectedly in response to our own actions. This means that long-term actions are less favorable, since the situation could change many times during their implementation.

Quick actions like solar shielding are better for managing poorly predictable processes, as we can see the results of our actions and quickly cancel or strengthen them if we don't like the results.

Context risks – influencing the probability of other global risks

Global warming has some context risks: it could slow tech progress; it could raise the chances of war (as has probably already happened in Syria because of drought: http://futureoflife.org/2016/07/22/climate-change-is-the-most-urgent-existential-risk/); and it could exacerbate conflicts between states about how to share resources (food, water etc.) and about responsibility for risk mitigation. All such context risks could lead to a larger global catastrophe.

Another context risk is that global warming captures almost all the public attention available for global risk mitigation, so other, more urgent risks may get less attention.

Many people think that runaway global warming is the main global catastrophic risk. Another group thinks it is AI, and there is no dialogue between these two groups.

The level of warming which is survivable strongly depends on our tech level. Some combinations of temperature and humidity are non-survivable for human beings without air conditioning: if the temperature rises by 15 C, half of the population will find itself in a non-survivable environment (http://www.sciencedaily.com/releases/2010/05/100504155413.htm), because very humid and hot air prevents cooling by perspiration and feels like a much higher temperature. With the current level of tech we could fight it, but if humanity fell to a medieval level, it would be much more difficult to recover in such conditions.
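The relevant quantity here is the wet-bulb temperature: a sustained wet-bulb temperature above roughly 35 C is lethal even for a resting person in the shade. A rough sketch using Stull's empirical approximation (my choice of formula, valid near sea-level pressure; it is not from the original post):

```python
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Stull (2011) empirical wet-bulb approximation; T in deg C, RH in %."""
    t, rh = temp_c, rh_percent
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

print(wet_bulb_stull(35, 50))   # ~26.6 C: very hot, but the body can still cool
print(wet_bulb_stull(45, 60))   # ~37.5 C: above the ~35 C survivability limit
```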

Rising CO2 levels could also impair human intelligence and slow tech progress, as CO2 levels near 1000 ppm are known to have negative effects on cognition.

Warming may also result in enormous hurricanes (hypercanes). They could appear if the sea surface temperature reached 50 C, and they would have wind speeds of 800 km/h, which is enough to destroy any known human structure. They would also be very stable and long-lived, influencing the atmosphere and creating strong winds all over the world. The highest sea surface temperature currently is around 30 C.


In fact, we should compare not the magnitude but the speed of global warming with the speed of tech progress: if the warming is quicker, it wins. If we have very slow warming but even slower progress, the warming still wins. In general, I think that progress will outrun warming, and that we will create strong AI before we have to deal with serious global warming consequences.

Different predictions

Multiple people predict extinction due to global warming, but they are mostly labelled "alarmists" and ignored. Some notable predictions:

1. David Auerbach predicts that by 2100 warming will reach 5 C, and that combined with resource depletion and overcrowding it will result in global catastrophe. http://www.dailymail.co.uk/sciencetech/article-3131160/Will-child-witness-end-humanity-Mankind-extinct-100-years-climate-change-warns-expert.html

2. Sam Carana predicts that warming will be 10 C in the 10 years following 2016, and extinction will happen in 2030. http://arctic-news.blogspot.ru/2016/03/ten-degrees-warmer-in-a-decade.html

3. Conventional IPCC predictions give a maximum warming of 6.4 C by 2100 in the worst-case emissions scenario with the worst climate sensitivity: https://en.wikipedia.org/wiki/Effects_of_global_warming#SRES_emissions_scenarios

4. The consensus of scientists is that the climate tipping point will come by 2200: http://www.independent.co.uk/news/science/scientists-expect-climate-tipping-point-by-2200-2012967.html

5. If humanity continues to burn all known carbon sources, the result will be a 10 C warming by 2300. https://www.newscientist.com/article/mg21228392-300-hyperwarming-climate-could-turn-earths-poles-green/ The only scenario in which we are still burning fossil fuels by 2300 (and are neither extinct nor a solar-powered supercivilization running nanotech and AI) is a series of nuclear wars or other smaller catastrophes which permit the existence of regional powers that repeatedly smash each other into ruins and then rebuild using coal energy. Something like a global nuclear "Somali world".

We should give more weight to the less mainstream predictions, because they describe the heavy tails of possible outcomes. I think it is reasonable to estimate the risk of extinction-level runaway global warming in the next 100-300 years at 1 per cent, and to act as if it is the main risk from global warming.

Identity map

“Identity” here refers to the question “will my copy be me, and if yes, under which conditions?” It results in several paradoxes which I will not repeat here, hoping that they are known to the reader.

Identity is one of the most complex problems, like safe AI or aging; it only appears to be simple. It is complex because it has to answer the question “Who is who?” in the universe, that is, to create a trajectory in the space of all possible minds, connecting identical or continuous observer-moments. But such a trajectory would be of the same complexity as the whole space of possible minds, and that is very complex.

There have been several attempts to dismiss the complexity of the identity problem, like open individualism (I am everybody) or zero-individualism (I exist only now). But they do not prevent the existence of the “practical identity” which I use when planning my tomorrow or when I am afraid of future pain.

The identity problem is also very important. If we (or AI) arrive at an incorrect solution, we may end up replaced by p-zombies or by copies-which-are-not-me during a “great uploading”. It would be a very subtle end of the world.

The identity problem is also equivalent to the immortality problem: if I am able to describe “what is me”, I will know what I need to preserve forever. This has practical importance now, as I am collecting data for my digital immortality (I even created a startup around it, and the map will be my main contribution to it; if I solve the identity problem, I will be able to sell the solution as a service: http://motherboard.vice.com/read/this-transhumanist-records-everything-around-him-so-his-mind-will-live-forever)

So we need to know how much, and what kind of, information I should preserve in order to be resurrected by a future AI. What information is enough to create a copy of me? And is information enough at all?

Moreover, the identity problem (IP) may be equivalent to the benevolent AI problem, because the first problem is, in a nutshell, “what is me?” and the second is “what is good for me?”. In any case, the IP requires a solution to the problem of consciousness, and the AI problem (that is, solving the nature of intelligence) is a somewhat similar topic.

I wrote 100+ pages trying to solve the IP and became lost in an ocean of ideas. So I decided to use something like the AIXI method of problem solving: I will list all possible solutions, even the craziest ones, and then assess them.

The following map is connected with several other maps: the map of p-zombies, the plan of future research into the identity problem, and the map of copies. http://lesswrong.com/lw/nsz/the_map_of_pzombies/

The map is based on the idea that each definition of identity is also a definition of Self, and each is strongly connected with one philosophical worldview (for example, dualism). Each definition of identity answers the question “what is identical to what”. Each also provides its own answer to the copy problem, its own definition of death (which is just the end of identity), and its own idea of how to reach immortality.

So on the horizontal axis we have classes of solutions:

“Self" definition - corresponding identity definition - philosophical reality theory - criteria and question of identity - death and immortality definitions.

On the vertical axis, various theories of Self and identity are presented, from the most popular at the top to the less popular below:

1) The group of theories which claim that a copy is not the original, because some kind of non-informational identity substrate exists. Candidate substrates: the same atoms, qualia, a soul, or, most popular, continuity of consciousness. All of them require physicalism to be false. Still, some instruments for preserving such identity could be built: for example, we could preserve the same atoms, or preserve the continuity of consciousness of some process, like the flame of a candle. But no valid arguments exist for any of these theories. In Parfit’s terms this is numerical identity (being the same person). It answers the question “What will I experience in the next moment of time?”

2) The group of theories which claim that a copy is the original if it is informationally the same. Here the main question is the amount of information required for identity. Some theories obviously require too much information, like the positions of all atoms in the body; others obviously require too little, like the DNA plus the name.

3) The group of theories which see identity as a social phenomenon: my identity is defined by my location and by the ability of others to recognise me as me.

4) The group of theories which connect my identity with my ability to make plans for future actions. Identity is a meaningful part of a decision theory.

5) Indirect definitions of self. This is a group of theories which define something with which the self is strongly connected, but which is not the self: a biological brain, space-time continuity, atoms, cells or complexity. Here we say that we don't know what constitutes identity, but we can know what it is directly connected with, and preserve that.

6) Identity as the sum of all its attributes, including name, documents, and recognition by other people. This is close to Leibniz's definition of identity. Basically, it is a duck test: if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.

7) Human identity as something very different from the identity of other things or possible minds: humans have evolved to have an idea of identity, a self-image, the ability to distinguish their own identity from the identities of others, and the ability to predict their own identity. So it is a complex adaptation consisting of many parts, and even if some parts are missing, they can be restored from the others.

There is also the problem of legal identity and responsibility.

8) Self-determination. The “Self” controls identity, creating its own criteria of identity and declaring its nature. The main idea here is that the conscious mind can redefine its identity in the most useful way. This also includes the idea that self and identity evolve during the different stages of personal human evolution.

9) Identity is meaningless. The popularity of this subset of ideas is growing; zero-identity and open identity both belong to it. The main counter-argument is that if we discard the idea of identity, future planning becomes impossible, and we have to bring some kind of identity back through the back door. The idea of identity also comes with the idea of the value of individuality: if we are replaceable like ants in an anthill, there are no identity problems, and also no problem with murder.

The following is a series of even less popular theories of identity; some of them I just constructed ad hoc.

10) Self as a subset of all thinking beings. We could see the space of all possible minds as divided into subsets, and call these subsets separate personalities.

11) Non-binary definitions of identity.

The idea here is that binary me-or-not-me solutions are too simple and produce all the logical problems. If we define identity continuously, as a number in the interval (0,1), we get rid of some paradoxes; the level of similarity, or the time until a given next stage, could be used as such a measure. Even a complex number could be used, to combine informational and continuity identity (in Parfit's sense).

12) Negative definitions of identity: we could try to say what is not me.

13) Identity as overlapping observer-moments.

14) Identity as a field of indexical uncertainty, that is, a group of observers to which I belong but within which I can't know which one I am.

15) The conservative approach to identity. As we don't know what identity is, we should try to save as much as possible, and risk our identity only if doing so is the only means of survival. That means no copy/paste transportation to Mars for pleasure, but yes if it is the only chance to survive. (This is my own position.)

16) Identity as individuality, i.e. uniqueness. If individuality doesn't exist or has no value, identity is not important.

17) Identity as a result of the ability to distinguish different people. Identity here is a property of perception.

18) Mathematical identity. Identity may be presented as a number sequence, where each number describes a full state of mind. A useful toy model (see the sketch after this list).

19) Infinite identity. The main idea here is that any mind has a non-zero probability of becoming any other mind after a series of transformations. So only one identity exists in the whole space of possible minds, but the expected time for me to become a given person is dramatically different for a future me (1 day) and for a random person (10 to the power of 100 years). This theory also needs a special version of quantum immortality which resets the “memories” of a dying being to zero, resulting in something like reincarnation, or an infinitely repeating universe in the style of Nietzsche's eternal recurrence.

20) Identity in a multilevel simulation. As we probably live in a simulation, there is a chance that it is a multiplayer game in which one gamer has several avatars and constantly has experiences through all of them, like one eye looking through several people.

21) Splitting identity. This is the idea that future identity could split into several (or infinitely many) streams. If we live in a quantum multiverse, we split every second without any (perceived) problems. We are also adapted to having several future copies, in the sense that we think about “me-tomorrow” and “me-the-day-after-tomorrow”.
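A minimal sketch of the toy model from point 18 (the state vectors, the distance metric and the threshold are all my illustrative assumptions):

```python
# Toy model for point 18: a mind as a sequence of state vectors; successive
# states belong to one "identity trajectory" if each step is similar enough.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def is_one_identity(states, threshold=1.0):
    return all(distance(s, t) < threshold for s, t in zip(states, states[1:]))

me_over_time = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.3)]   # gradual change
teleport_jump = [(0.0, 0.0), (5.0, 5.0)]              # abrupt discontinuity

print(is_one_identity(me_over_time))    # True: one continuous identity
print(is_one_identity(teleport_jump))   # False: the chain breaks
```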

This list shows only the groups of identity definitions; many more smaller ideas are included in the map.

The only rational choice I see is the conservative approach: acknowledging that we don't know the nature of identity, and trying in each situation to save as much as possible of whatever might constitute it.

The pdf: http://immortality-roadmap.com/identityeng8.pdf

The map of p-zombies

No real p-zombies are likely to exist, but a lot of ideas about them have been suggested. This map is a map of those ideas. It may be fun, or it may be useful.

The most useful application of p-zombie research is to determine whether we could lose something important during uploading.

We have to solve the problem of consciousness before we are uploaded. Otherwise we may get the most stupid end of the world: everybody is alive and happy, but everybody is a p-zombie.

Most ideas here are from the Stanford Encyclopedia of Philosophy, the LessWrong wiki, RationalWiki, a recent post by EY, and from the works of Chalmers and Dennett. Some ideas are mine.

The pdf is here.