About Me

Hi, I am Alexey Turchin: x-risks researcher, creator of roadmaps, collector of Russian outsider art, founder of the Russian Transhumanist Party, and founder of the "Digital Immortality Now" company.

Here are links to my main resources:

My Facebook:
https://www.facebook.com/turchin.alexei

Links to all my roadmaps:
http://immortality-roadmap.com/sample-page/

My art collection:
http://russian-outsider-art.com/

My bio:
http://ieet.org/index.php/IEET/bio/turchin/

More about me:
https://fromhumantogod.wordpress.com/about/

My Startup:
http://digital-immortality-now.com/

My book on x-risks prevention:
http://www.slideshare.net/avturchin/preventing-existential-risks-book

Russian Transhumanist Party:
https://www.facebook.com/groups/russiantranshumanistparty/

My slides, maps and some texts:
http://www.slideshare.net/avturchin/edit_my_uploads

X-risks library:
https://fromhumantogod.wordpress.com/human-extinction-library/

My Twitter:
https://twitter.com/turchin

Me on YouTube:
https://www.youtube.com/playlist?list=PLIeH71HlSWqucEJYPUmfeymD-H-pT6CQ2

My articles on LessWrong
http://lesswrong.com/user/turchin/submitted/

Scientific publications:

Turchin A. The possible reasons for underestimation of risks of human extinction. // Problems of Risk Management and Safety. Vol. 31, Moscow, 2007, pp. 266–305.
Turchin A. Natural disasters and the anthropic principle. // Problems of Risk Management and Safety. Vol. 31, Moscow, 2007, pp. 306–332.
Turchin A. Problems of sustainable development and perspectives of global catastrophes. // Social Science and Modern Times. Moscow, 2010, No. 1, pp. 156–163.

Contact me at alexeiturchin at gmail.com

Slow shutdown of the simulation

I just got a new idea about a possible end of the world: a "slow simulation shutdown".

TL;DR: A shutdown of the simulation may be observed and may be unpleasant, and it is especially likely to be observed if there are infinitely many simulations. From the observer's point of view it will look like a very strange global catastrophe.

There is a possibility of something like quantum immortality for a many-simulations world: if we have many identical simulations and some of them are shut down, nobody will feel it, and it will have no observable consequences.

Let's name it "simulation immortality", as there is nothing quantum about it. I think that it may be true, but it requires two conditions: many simulations, and a solution to the identity problem (is a copy of me in a remote part of the universe me?). I wrote about it here: http://lesswrong.com/lw/n7u/the_map_of_quantum_big_world_immortality/

Simulation immortality precisely neutralizes the risk of the simulation being shut down. And if we accept the logic of quantum immortality, it works even more broadly, preventing any other x-risk, because in any case one person (the observer in his timeline) will survive.


In the case of a simulation shutdown, this works nicely if the shutdown is instantaneous and uniform. But if the servers are shut down one by one, we will see stars disappearing, and for some period of time we will find ourselves in a strange and unpleasant world. The shutdown may take only a millisecond in the base universe, but it could take a long time in the simulated one, maybe days.

A slow shutdown is an especially unpleasant scenario for two reasons connected with "simulation immortality":

- Because of simulation immortality, its chances rise dramatically: if, for example, 1000 simulations are shutting down and one of them is shutting down slowly, I (my copy) will find myself only in the one with the slow shutdown.

- If I find myself in a slow shutdown, there are infinitely many identical simulations which are also experiencing the slow shutdown. This means that my slow shutdown will never end, from my observer's point of view. Most probably, after all this adventure, I will find myself in a simulation whose shutdown was stopped or reversed, or which is stuck somewhere in the middle.
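To illustrate the first point numerically, here is a minimal toy Monte Carlo sketch (the 1000-simulation setup is from the example above; the assumption that experience continues only where a simulation is still running is the premise of simulation immortality):

```python
import random

# Toy model of "simulation immortality" in a slow shutdown (illustrative
# numbers): 1000 identical simulations shut down, 999 instantly and one
# slowly. A copy in an instantly-halted simulation has no further
# observer-moments, so any experience after the shutdown begins is located,
# by definition, in the slow one.
N, trials = 1000, 100_000
prior_in_slow = 0                   # before the shutdown: is my copy in the slow one?
survivors = posterior_in_slow = 0   # copies that experience anything afterwards
for _ in range(trials):
    i_am_slow = (random.randrange(N) == 0)   # simulation 0 is the slow one
    prior_in_slow += i_am_slow
    if i_am_slow:                            # only this copy experiences the shutdown
        survivors += 1
        posterior_in_slow += 1

print(prior_in_slow / trials)                  # ~0.001: unconditional chance
print(posterior_in_slow / max(survivors, 1))   # 1.0: conditional on experiencing it
```

Conditional on having any post-shutdown experience at all, the observer is certain to be in the slow shutdown, even though the prior chance of it was only 0.1 per cent.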

See simulation map here: http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

Global warming prevention plan

TL;DR: Even a small probability of runaway global warming requires the preparation of urgent, unconventional prevention measures, namely sunlight dimming.



Abstract:

The most expected scenario of limited global warming of several degrees C in the 21st century will not result in human extinction, as even the thawing after the Ice Age in the past did not have such an impact.

The main question of global warming is the possibility of runaway global warming and the conditions under which it could happen. Runaway warming means warming of 30 C or more, which would make the Earth uninhabitable. It is an unlikely event, but it could result in human extinction.

Global warming could also create some context risks, which would change the probability of other global risks.

I will not go into all the details of the nature of global warming and the established ideas about its prevention here, as they have extensive coverage in Wikipedia (https://en.wikipedia.org/wiki/Global_warming and https://en.wikipedia.org/wiki/Climate_change_mitigation).

Instead, I will concentrate on heavy-tail risks and less conventional methods of global warming prevention.

The map provides a summary of all known methods of GW prevention, as well as ideas about the scale of GW and the consequences of each level of warming.

The map also shows how prevention plans depend on the current level of technology. In short, the map has three variables: level of tech, level of urgency in GW prevention, and scale of the warming.

The following post consists of a wall of text and the map, which are complementary: the text provides in-depth details about some ideas, and the map gives a general overview of the prevention plans.



The map: http://immortality-roadmap.com/warming3.pdf



Uncertainty

The main feature of climate theory is its intrinsic uncertainty. This uncertainty is not about climate change denial; we are almost sure that anthropogenic climate change is real. The uncertainty is about its exact scale and timing, and especially about low probability tails with high consequences. In the case of risk analysis we can’t ignore these tails as they bear the most risk. So I will focus mainly on the tails, but this in turn requires a focus on more marginal, contested or unproved theories.

These uncertainties are especially large if we make projections for 50-100 years from now; they are connected with the complexity of the climate system, the unpredictability of future emissions, and the chaotic nature of the climate.



Clathrate methane gun

An unconventional but possible global catastrophe, accepted by several researchers, is a greenhouse catastrophe named the “runaway greenhouse effect”. The idea is well covered in Wikipedia: https://en.wikipedia.org/wiki/Clathrate_gun_hypothesis

Currently, large amounts of methane clathrate are present in the Arctic, and since this area is warming more quickly than other regions, the gases could be released into the atmosphere. https://en.wikipedia.org/wiki/Arctic_methane_emissions



Predictions relating to the speed and consequences of this process differ. Mainstream science sees the methane cycle as a dangerous but slow process which could eventually result in a 6 C rise in global temperature, which seems bad but survivable. It would also take thousands of years.



It happened once before, during the Late Paleocene, in an event known as the Paleocene–Eocene Thermal Maximum (PETM), https://en.wikipedia.org/wiki/Paleocene%E2%80%93Eocene_Thermal_Maximum, when the temperature jumped by about 6 C, probably because of methane. Methane-driven global warming is just one of ten hypotheses explaining the PETM. But during the PETM, global methane clathrate deposits were around 10 times smaller than at present, because the ocean was warmer. This means that if the clathrate gun fires again, it could have much more severe consequences.



But some scientists think that it may happen quickly and with stronger effects, resulting in runaway global warming, because of several positive feedback loops. See, for example, the blog http://arctic-news.blogspot.ru/



There are several possible positive feedback loops which could make methane-driven warming stronger:

1) The Sun is now brighter than before because of stellar evolution. The increase in the Sun’s luminosity will eventually result in runaway global warming in a period of 100 million to 1 billion years from now. The Sun will become thousands of times more luminous when it becomes a red giant. See more here: https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans

2) After a long period of a cold climate (ice ages), a large amount of methane clathrate accumulated in the Arctic.

3) Methane is a short-lived atmospheric gas (about seven years). So the same amount of methane would result in much more intense warming if released quickly, compared with a scenario in which the release is scattered over centuries (see the toy decay model after this list). The speed of methane release depends on the speed of global warming. Anthropogenic CO2 is increasing very quickly and could be followed by a quick release of methane.

4) Water vapor is the strongest greenhouse gas, and more warming results in more water vapor in the atmosphere.

5) Coal burning has resulted in large global dimming (https://en.wikipedia.org/wiki/Global_dimming), and the current switch to cleaner technologies could end the masking of global warming.

6) The ocean’s ability to dissolve CO2 falls with a rise in temperature.

7) The Arctic has the biggest temperature increase due to global warming, with a projected rise of 5-10 C. As a result, it will lose its ice shield, which would reduce the Earth’s albedo and lead to higher temperatures. The same is true for permafrost and snow cover.

8) Warmer Siberian rivers bring their water into the Arctic Ocean.

9) The Gulf Stream will bring warmer water from the Gulf of Mexico to the Arctic Ocean.

10) The current period of a calm, spotless Sun will end, resulting in further warming.
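Here is the toy decay model promised in point 3 (a sketch with assumed numbers: the seven-year lifetime stated above, and my arbitrary choice of a 200-year "slow" release; modern lifetime estimates are closer to 10-12 years):

```python
import numpy as np

# Toy pulse-vs-chronic release model: the same total amount of methane M is
# released either in 1 year or spread over 200 years. Atmospheric methane
# decays roughly exponentially with an assumed ~7-year lifetime.
tau = 7.0                        # assumed atmospheric lifetime, years
years = np.arange(0, 300)
M = 1.0                          # total release, arbitrary units

def burden(release_duration):
    rate = M / release_duration  # emission per year while the release lasts
    b = np.zeros_like(years, dtype=float)
    for t in range(1, len(years)):
        emit = rate if years[t] <= release_duration else 0.0
        b[t] = b[t - 1] * np.exp(-1.0 / tau) + emit
    return b

fast, slow = burden(1), burden(200)
print(fast.max() / slow.max())   # peak atmospheric burden ratio
```

With these assumptions, the peak burden of the fast release comes out roughly 25-30 times higher than that of the slow one; that is the whole point of the "gun" metaphor.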



Anthropic bias

One unconventional reason why global warming may be more dangerous than we usually think is anthropic bias.

1. We tend to think that we are safe because no runaway global warming event has ever happened in the past. But we can only observe a planet where it never happened. Milan Ćirković and Bostrom wrote about this, so the real rate of runaway warming could be much higher. See here: http://www.nickbostrom.com/papers/anthropicshadow.pdf

2. Also, we humans tend to find ourselves in a period when climate changes are very strong, because of climate instability. This is because human intelligence, as a universal adaptation mechanism, was most effective in periods of instability. So climate instability helps to breed intelligent beings. (This is my idea and may need additional proof.)

3. But if runaway global warming is long overdue, this would mean that our environment is sensitive even to smaller human actions (compare it with an over-pressured balloon and a small needle). In this case, the amount of CO2 we currently release could be such an action. So we could be underestimating the fragility of our environment because of anthropic bias. (This is my idea, and I wrote about it here: http://www.slideshare.net/avturchin/why-anthropic-principle-stopped-to-defend-us-observation-selection-and-fragility-of-our-environment)



The timeline of possible runaway global warming

We could call runaway global warming a Venusian scenario: thanks to the greenhouse effect, the surface temperature of Venus is over 400 C, despite the fact that, owing to its high albedo (0.75, caused by white clouds), it receives less solar energy than the Earth (albedo 0.3).



A greenhouse catastrophe can consist of three stages:

1. Warming of 1-2 degrees due to anthropogenic CO2 in the atmosphere, and passage of a “trigger point”. We don’t know where the tipping point is; we may have passed it already, or, conversely, we may be underestimating natural self-regulating mechanisms.

2. Warming of 10-20 degrees because of methane from gas hydrates and the Siberian bogs, as well as the release of CO2 currently dissolved in the oceans. The speed of this self-amplifying process is limited by the thermal inertia of the ocean, so it will probably take about 10-100 years. This process can be arrested only by sharp hi-tech interventions, like an artificial nuclear winter and/or eruptions of multiple volcanoes. But the more warming occurs, the lower the ability of civilization to stop it, as its technologies will be damaged. On the other hand, the later global warming happens, the higher the tech level that can be used to stop it.

3. Moist greenhouse. Water vapor is a major contributor to the greenhouse effect, which results in an even stronger and quicker positive feedback loop. A moist greenhouse will start if the average temperature of the Earth reaches 47 C (it is currently 15 C), and it will lead to runaway evaporation of the oceans, resulting in 900 C surface temperatures (https://en.wikipedia.org/wiki/Future_of_the_Earth#Loss_of_oceans). All the water on the planet will boil, producing a dense water vapor atmosphere. See also here: https://en.wikipedia.org/wiki/Runaway_greenhouse_effect



Prevention



If we survive until a positive Singularity, global warming will not be an issue. But if strong AI and other supertechnologies don’t arrive before the end of the 21st century, we will need to invest a lot in warming prevention, as civilization could collapse before the creation of strong AI, which would mean that we will never be able to use its benefits.



I have a map which summarizes the known ideas for global warming prevention and adds some new ones for urgent risk management: http://immortality-roadmap.com/warming2.pdf







The map has two main variables: our level of tech progress and the size of the warming which we want to prevent. A third key variable is the ability of humanity to unite and act proactively. In short, the plans are:

No plan – do nothing and just adapt to the warming.

Plan A – cutting emissions and removing greenhouse gases from the atmosphere. It requires a lot of investment and cooperation; action is long-term and results are remote.

Plan B – geo-engineering aimed at blocking sunlight. It does not require much investment, and unilateral action is possible. Quicker action and quicker results, but it involves risks in the case of a switch-off.

Plan C – emergency actions for Sun dimming, like an artificial volcanic winter.

Plan D – moving to other planets.

All plans could be executed at the current tech level, and also at a higher tech level through the use of nanotech and so on.



I think that climate change demands that we go directly to plan B. Plan A is cutting emissions, and it’s not working, because it is very expensive and requires cooperation from all sides. Even then it will not achieve immediate results and the temperature will still continue to rise for many other reasons.



Plan B is changing the opacity of the Earth’s atmosphere. It could be a surprisingly low-cost exercise and could be operated locally. There are suggestions to release something as simple as sulfuric acid into the upper atmosphere to raise its reflectivity.



"According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft." https://www.technologyreview.com/s/511016/a-cheap-and-easy-plan-to-stop-global-warming/



There are also ideas to recapture CO2 using genetically modified organisms, iron seeding of the oceans, and dispersing the carbon-capturing mineral olivine.



The problem with that approach is that it can't be stopped. As Seth Baum wrote, a smaller catastrophe could disrupt such engineering, with the consequent immediate return of global warming with a vengeance. http://sethbaum.com/ac/2013_DoubleCatastrophe.html



There are other ways of preventing global warming. Plan C is creating an artificial nuclear winter through a volcanic explosion or by starting large-scale forest fires with nukes. This idea is even more controversial and untested than geo-engineering.



A regional nuclear war is capable of putting 5 million tons of black carbon into the upper atmosphere; “average global temperatures would drop by 2.25 degrees F (1.25 degrees C) for two to three years afterward, the models suggest.”

http://news.nationalgeographic.com/news/2011/02/110223-nuclear-war-winter-global-warming-environment-science-climate-change/ Nuclear explosions in deep forests may have the same effect as attacks on cities in terms of soot production.



The fight between Plan A and Plan B



So we are not even close to being doomed by global warming but we may have to change the way we react to it.

While cutting emissions is important, it will probably not work within a 10-20 year period, so quicker-acting measures should be devised.



The main risk is abrupt runaway global warming. It is a low-probability event with the highest consequences. To fight it, we should prepare rapid response measures.



Such preparation should be done in advance, which requires expensive scientific experiments. The main problems here are (as always) funding and regulators’ approval. The impact of sulfur aerosols should be tested, and complicated math models should be evaluated.



Counterarguments are the following: “Openly embracing climate engineering would probably also cause emissions to soar, as people would think that there's no need to even try to lower emissions any more. So, if for some reason the delivery of that sulfuric acid into the atmosphere or whatever was disrupted, we'd be in trouble. And do we know enough of such measures to say that they are safe? Of course, if we believe that history will end anyways within decades or centuries because of singularity, long-term effects of such measures may not matter so much… Another big issue with changing insolation is that it doesn't solve ocean acidification. No state actor should be allowed to start geo-engineering until they at least take simple measures to reduce their emissions.” (comments from a LessWrong discussion about GW).



Currently it all looks like a political fight between Plan A (cutting emissions) and Plan B (geo-engineering), in which Plan A is winning approval. It has been suggested that Plan B not be implemented, as an increase in the warming would demonstrate a real need to implement Plan A (cutting emissions). Regulators did not approve even the smallest experiments with sulfur shielding in Britain. Iron ocean seeding also has regulatory problems.

But the same logic works in the opposite direction: China and the coal companies will not cut emissions, because they want to press policymakers to implement Plan B. It looks like a prisoner’s dilemma between the two plans.



The difference between the two plans is that Plan A would return everything to its natural state, while Plan B is aimed at creating instruments to regulate the planet’s climate and weather.



In the current global political situation, cutting emissions is difficult to implement because it requires collaboration between many rival companies and countries. If several of them defect (most likely China, Russia and India, which make heavy use of coal and other fossil fuels), it will not work, even if all of Europe were solar-powered.



A transition to a zero-emission economy could happen naturally within 20 years, after electric transportation and solar energy become widespread.



Plan C should be implemented if the situation suddenly changes for the worse, with the temperature jumping 3-5 C in one year. In this case, the only option we have is to bomb the Pinatubo volcano to make it erupt again, or probably even several volcanoes. A volcanic winter would give us time to adopt other geo-engineering measures.



I would also advocate a mixture of both plans, because they work on different timescales. Cutting emissions and removing CO2 using the current level of technology would take decades to have an impact on the climate, but geo-engineering has a reaction time of around one year, so we could use it to cover the bumps in the road.



Especially important is the fact that if we completely stop emissions, we will also stop the global dimming from coal burning, which would result in a 3 C global temperature jump. So stopping emissions may itself cause a temperature jump, and we need a protection system for that case.



In all cases, we need to survive until stronger technologies develop: using nanotech or genetic engineering, we could solve the warming problem with much less effort. But we have to survive until that time.



It seems to me that the idea of cutting emissions is overhyped and solar radiation management is "underhyped" in terms of public opinion and funding. By correcting that imbalance, we could achieve more common good.



An unpredictable climate needs a quicker regulation system

The management of climate risks depends on their predictability, and it seems that this predictability is not very high. The climate is a very complex and chaotic system.



It may react unexpectedly to our own actions. This means that long-term measures are less favorable: the situation could change many times during their implementation.



Quick actions like solar shielding are better for managing poorly predictable processes, as we can see the results of our actions and quickly cancel or strengthen them if we don't like the outcome.



Context risks – influencing the probability of other global risks



Global warming has some context risks: it could slow tech progress; it could raise the chances of war (this probably already happened in Syria because of drought: http://futureoflife.org/2016/07/22/climate-change-is-the-most-urgent-existential-risk/); and it could exacerbate conflicts between states about how to share resources (food, water etc.) and about the responsibility for risk mitigation. All such context risks could lead to a larger global catastrophe.



Another context risk is that global warming captures almost all the public attention available for global risk mitigation, so other, more urgent risks may get less attention.



Many people think that runaway global warming constitutes the main risk of global catastrophe. Another group thinks it is AI, and there is no dialogue between these two groups.



The level of warming which is survivable strongly depends on our tech level. Some combinations of temperature and humidity are non-survivable for human beings without air conditioning: if the temperature rises by 15 C, half of the population will be in a non-survivable environment (http://www.sciencedaily.com/releases/2010/05/100504155413.htm), because very humid and hot air prevents cooling by perspiration and feels like a much higher temperature. With the current level of tech we could fight it, but if humanity fell to a medieval level, it would be much more difficult to recover in such conditions.
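The humidity point can be made concrete with the wet-bulb temperature. Below is a sketch using Stull's (2011) empirical approximation (my choice of formula, not from the cited article); sustained wet-bulb temperatures above roughly 35 C defeat cooling by perspiration:

```python
import math

def wet_bulb_stull(T, RH):
    """Stull (2011) empirical wet-bulb temperature.
    T in deg C, RH in percent; an approximation, valid roughly for
    RH 5-99% and ordinary surface temperatures."""
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH) - math.atan(RH - 1.676331)
            + 0.00391838 * RH**1.5 * math.atan(0.023101 * RH)
            - 4.686035)

print(wet_bulb_stull(35, 50))   # ~26.6 C: hot but survivable
print(wet_bulb_stull(45, 60))   # ~37.5 C: beyond the ~35 C survivability limit
```

So a 45 C day at 60% humidity is already past the physiological limit, even though 45 C in dry air is survivable.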



Rising CO2 levels could also impair human intelligence and slow tech progress as CO2 levels near 1000 ppm are known to have negative effects on cognition.



Warming may also result in enormous hurricanes. They could appear if the sea temperature reaches 50 C; they would have wind speeds of up to 800 km/h, which is enough to destroy any known human structure. They would also be very stable and long-lived, influencing the atmosphere and creating strong winds all over the world. The highest sea temperature currently is around 30 C.

http://en.wikipedia.org/wiki/Hypercane



In fact, we should compare not the magnitude but the speed of global warming with the speed of tech progress. If the warming is quicker, it wins. If we have very slow warming but even slower progress, the warming still wins. In general, I think that progress will overrun warming, and we will create strong AI before we have to deal with serious global warming consequences.



Different predictions



Multiple people predict extinction due to global warming but they are mostly labeled as “alarmists” and are ignored. Some notable predictions:

1. David Auerbach predicts that by 2100 warming will reach 5 C and, combined with resource depletion and overcrowding, will result in global catastrophe. http://www.dailymail.co.uk/sciencetech/article-3131160/Will-child-witness-end-humanity-Mankind-extinct-100-years-climate-change-warns-expert.html



2. Sam Carana predicts warming of 10 C within the 10 years following 2016, with extinction happening by 2030. http://arctic-news.blogspot.ru/2016/03/ten-degrees-warmer-in-a-decade.html



3. Conventional IPCC predictions give a maximum warming of 6.4 C by 2100 in the worst-case emissions scenario combined with the worst climate sensitivity: https://en.wikipedia.org/wiki/Effects_of_global_warming#SRES_emissions_scenarios



4. The consensus of scientists is that a climate tipping point will come by 2200: http://www.independent.co.uk/news/science/scientists-expect-climate-tipping-point-by-2200-2012967.html



5. If humanity continues to burn all known carbon sources, the result will be around 10 C of warming by 2300. https://www.newscientist.com/article/mg21228392-300-hyperwarming-climate-could-turn-earths-poles-green/ The only scenario in which we are still burning fossil fuels by 2300 (but are neither extinct nor a solar-powered supercivilization running nanotech and AI) is a series of nuclear wars or other smaller catastrophes, which would permit the existence of regional powers that regularly smash each other into ruins and then rebuild using coal energy. Something like a global nuclear “Somali world”.



We should give more weight to less mainstream predictions, because they describe the heavy tails of possible outcomes. I think it is reasonable to estimate the risk of extinction-level runaway global warming in the next 100-300 years at 1 per cent, and to act as if it is the main risk from global warming.

Identity map

“Identity” here refers to the question “will my copy be me, and if yes, under which conditions?” It results in several paradoxes which I will not repeat here, hoping that they are known to the reader.

Identity is one of the most complex problems, like safe AI or aging; it only appears to be simple. It is complex because it has to answer the question “Who is who?” in the universe, that is, to draw a trajectory in the space of all possible minds connecting identical or continuous observer-moments. But such a trajectory would be of the same complexity as the whole space of possible minds, and that is very complex.

There have been several attempts to dismiss the complexity of the identity problem, like open individualism (I am everybody) or zero-individualism (I exist only now). But they do not prevent the existence of “practical identity” which I use when planning my tomorrow or when I am afraid of future pain.

The identity problem is also very important. If we (or AI) arrive at an incorrect solution, we will end up being replaced by p-zombies or just copies-which-are-not-me during a “great uploading”. It will be a very subtle end of the world.

The identity problem is also equivalent to the immortality problem: if I am able to describe “what is me”, I will know what I need to save forever. This has practical importance now, as I am collecting data for my digital immortality (I even created a startup about it, and the map will be my main contribution to it; if I solve the identity problem, I will be able to sell the solution as a service: http://motherboard.vice.com/read/this-transhumanist-records-everything-around-him-so-his-mind-will-live-forever).

So we need to know how much and what kind of information I should preserve in order to be resurrected by future AI. What information is enough to create a copy of me? And is information enough at all?

Moreover, the identity problem (IP) may be equivalent to the benevolent AI problem, because the first problem is, in a nutshell, “What is me?” and the second is “What is good for me?”. In any case, the IP requires a solution to the consciousness problem, and the AI problem (that is, solving the nature of intelligence) is a somewhat similar topic.

I wrote 100+ pages trying to solve the IP and became lost in the ocean of ideas. So I decided to use something like the AIXI method of problem solving: I will list all possible solutions, even the craziest ones, and then assess them.

The following map is connected with several other maps: the map of p-zombies, the plan of future research into the identity problem, and the map of copies. http://lesswrong.com/lw/nsz/the_map_of_pzombies/

The map is based on the idea that each definition of identity is also a definition of Self, and each is strongly connected with one philosophical worldview (for example, dualism). Each definition of identity answers the question “what is identical to what”. Each definition also provides its own answer to the copy problem, its own definition of death - which is just the end of identity - and its own idea of how to reach immortality.



So on the horizontal axis we have classes of solutions:

“Self" definition - corresponding identity definition - philosophical reality theory - criteria and question of identity - death and immortality definitions.



On the vertical axis, various theories of Self and identity are presented, from the most popular at the top to the less popular below:

1) The group of theories which claim that a copy is not the original, because some kind of non-informational identity substrate exists. Possible substrates: the same atoms, qualia, the soul, or - most popular - continuity of consciousness. All of them require physicalism to be false. Still, some instruments for preserving this kind of identity could be built: for example, we could preserve the same atoms, or preserve the continuity of consciousness of some process, like the flame of a candle. But no valid arguments exist for any of these theories. In Parfit’s terms, this is numerical identity (being the same person). It answers the question “What will I experience in the next moment of time?”

2) The group of theories which claim that a copy is the original if it is informationally the same. Here the main question is about the amount of information required for identity. Some theories obviously require too much information, like the positions of all atoms in the body being the same, and other theories obviously require too little, like only the DNA and the name.

3) The group of theories which see identity as a social phenomenon. My identity is defined by my location and by the ability of others to recognise me as me.

4) The group of theories which connect my identity with my ability to make plans for future actions. Identity is a meaningful part of a decision theory.

5) Indirect definitions of self. This is a group of theories which define something with which the self is strongly connected, but which is not the self: a biological brain, space-time continuity, atoms, cells or complexity. In this situation we say that we don’t know what constitutes identity, but we know what it is directly connected with, and we could preserve that.

6) Identity as the sum of all its attributes, including name, documents, and recognition by other people. This is close to Leibniz’s definition of identity. Basically, it is a duck test: if it looks like a duck, swims like a duck, and quacks like a duck, then it is probably a duck.

7) Human identity as something very different from the identity of other things or possible minds, as humans have evolved to have an idea of identity: a self-image, the ability to distinguish their own identity from the identity of others, and the ability to predict their identity. So it is a complex adaptation consisting of many parts, and even if some parts are missing, they could be restored using the others.

There is also a problem of legal identity and responsibility.

8) Self-determination. “Self” controls identity, creating its own criteria of identity and declaring its nature. The main idea here is that the conscious mind can redefine its identity in the most useful way. It also includes the idea that self and identity evolve during differing stages of personal human evolution.

9) Identity is meaningless. The popularity of this subset of ideas is growing. Zero-identity and open identity both belong to this subset. The main counterargument here is that if we cut out the idea of identity, future planning becomes impossible, and we will have to return to some kind of identity through the back door. The idea of identity also comes with the idea of the value of individuality. If we are replaceable like ants in an anthill, there are no identity problems. There is also no problem with murder.



The following is a series of even less popular theories of identity, some of which I just constructed ad hoc.

10) Self as a subset of all thinking beings. We could see the space of all possible minds as divided into subsets, and call those subsets separate personalities.

11) Non-binary definitions of identity.

The idea here is that binary me-or-not-me identity solutions are too simple and produce all the logical problems. If we define identity continuously, as a number in the interval (0,1), we will get rid of some paradoxes; the level of similarity, or the time until a given next stage, could be used as such a measure (a toy numeric sketch is given after this list). Even a complex number could be used, if we include both informational and continuity-based identity (in Parfit’s sense).

12) Negative definitions of identity: we could try to say what is not me.

13) Identity as overlapping observer-moments.

14) Identity as a field of indexical uncertainty, that is, a group of observers to which I belong but within which I can’t know which one I am.

15) Conservative approach to identity. As we don’t know what identity is we should try to save as much as possible, and risk our identity only if it is the only means of survival. That means no copy/paste transportation to Mars for pleasure, but yes if it is the only chance to survive (this is my own position).

16) Identity as individuality, i.e. uniqueness. If individuality doesn’t exist or doesn’t have any value, identity is not important.

17) Identity as a result of the ability to distinguish different people. Identity here is a property of perception.

18) Mathematical identity. Identity may be represented as a number sequence, where each number describes a full state of mind. A useful toy model (see the sketch after this list).

19) Infinite identity. The main idea here is that any mind has a non-zero probability of becoming any other mind after a series of transformations. So only one identity exists in the whole space of possible minds, but the expected time for me to become a given person is dramatically different for future me (1 day) and a random person (10^100 years). This theory also needs a special version of quantum immortality which resets the “memories” of a dying being to zero, resulting in something like reincarnation, or an infinitely repeating universe in the style of Nietzsche's eternal recurrence.

20) Identity in a multilevel simulation. As we probably live in a simulation, there is a chance that it is a multiplayer game in which one gamer has several avatars and constantly has experiences through all of them. It is like one eye looking through several people.

21) Splitting identity. This is the idea that future identity could split into several (or infinitely many) streams. If we live in a quantum multiverse, we split every second without any (perceived) problems. We are also adapted to having several future copies when we think about “me-tomorrow” and “me-the-day-after-tomorrow”.
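As promised in points 11 and 18, here is a minimal toy sketch (entirely my illustration, not a claim from the map) of non-binary, mathematical identity: mind-states as feature vectors, with "identity" between two states as a similarity score in (0,1) rather than a binary me/not-me answer.

```python
import numpy as np

# Toy model of identity as a number in (0,1): each mind-state is a feature
# vector, and "identity" between two states is a graded similarity score.
def identity_degree(state_a, state_b):
    a, b = np.asarray(state_a, float), np.asarray(state_b, float)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return (cos + 1) / 2          # map cosine from [-1, 1] into [0, 1]

me_today    = [0.9, 0.3, 0.5, 0.1]
me_tomorrow = [0.88, 0.33, 0.48, 0.12]   # small drift: identity near 1
stranger    = [0.1, 0.9, 0.2, 0.8]

print(identity_degree(me_today, me_tomorrow))  # ~0.999
print(identity_degree(me_today, stranger))     # ~0.7: noticeably lower
```

A person is then a trajectory of such states, and "death" could be defined as the similarity to all future states dropping below some threshold.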



This list shows only groups of identity definitions; many smaller ideas are included in the map.

The only rational choice I see is a conservative approach: acknowledging that we don’t know the nature of identity, and trying to save as much as possible in each situation in order to preserve identity.

The pdf: http://immortality-roadmap.com/identityeng8.pdf

The map of p-zombies

Real p-zombies most probably do not exist, but a lot of ideas about them have been suggested. This map is a map of those ideas. It may be fun, or it may be useful.

The most useful application of p-zombie research is to determine whether we could lose something important during uploading.

We have to solve the problem of consciousness before we are uploaded. Otherwise it would be the most stupid end of the world: everybody is alive and happy, but everybody is a p-zombie.

Most ideas here are from the Stanford Encyclopedia of Philosophy, the LessWrong wiki, RationalWiki, a recent post by EY, and the works of Chalmers and Dennett. Some ideas are mine.

The pdf is here.
http://immortality-roadmap.com/zombiemap3.pdf

Hypothetical super-earthquakes as x-risks

Super-Earthquake

We could define a super-earthquake as a hypothetical large-scale quake leading to the full destruction of human-built structures over the whole surface of the Earth. No such quake has happened in human history, and the only scientifically solid scenario for one seems to be a large asteroid impact.
Such an event could not by itself result in human extinction, as there would be ships, planes, and people in the countryside. But it would unequivocally destroy all of technological civilization. To do so, it would need an intensity of around 10 on the Mercalli scale: http://earthquake.usgs.gov/learn/topics/mercalli.php


It would be interesting to assess the probability of worldwide earthquakes which could destroy everything on the Earth's surface. Plate tectonics as we know it can't produce them. But the distribution of the largest earthquakes could have a long, heavy tail which may include worldwide quakes.

So, how could it happen?

1) An asteroid impact surely could result in a worldwide earthquake. I think a 1-mile asteroid is enough to create one (see the rough energy estimate after this list).

2) A change in the buoyancy of a large landmass may result in a whole continent uplifting, perhaps by miles. (This is just my conjecture, not a proven scientific fact, so its possibility needs further assessment.) A smaller-scale event of this type happened in 1957 during the Gobi-Altay earthquake, when a whole mountain ridge moved. https://en.wikipedia.org/wiki/1957_Mongolia_earthquake


3) Unknown processes in the mantle sometimes result in large, deep earthquakes: https://en.wikipedia.org/wiki/2013_Okhotsk_Sea_earthquake

4) Very hypothetical changes in the Earth's core may also result in worldwide earthquakes: the core could somehow collapse, because of a change in the crystal structure of its iron, or because of the explosion of a (hypothetical) natural uranium nuclear reactor inside it. Passing through clouds of dark matter could also activate the Earth's core, which could be heated by the annihilation of dark matter particles, as was suggested in one recent study: http://www.sciencemag.org/news/2015/02/did-dark-matter-kill-dinosaurs

Such warming of the Earth's core would result in its expansion and might trigger large, deep quakes.

5) A superbomb explosion. Blockbuster bombs in WW2 were used to create mini-quakes as their main killing effect, exploding after penetrating the ground. Large nukes could be used the same way, but a super-earthquake requires energy beyond the current power of nukes by several orders of magnitude. Many superbombs might be needed to create a superquake.
6) The Earth could crack in the area of oceanic rifts. I have read suggestions that oceanic rifts expand not gradually but in large jumps. These mid-oceanic ridges create new ocean floor: https://en.wikipedia.org/wiki/Mid-Atlantic_Ridge The evidence for this is large “steps” in the ocean floor in the zones of oceanic rifts. The boiling of water trapped in a rift and in contact with magma may also contribute to an explosive, zipper-style rupture of the rifts. But this idea may come from fringe-science catastrophism, so it should be taken with caution.
7) Supervolcano explosions. Large-scale eruptions, like kimberlite pipe explosions, would also produce an earthquake felt over the whole Earth, though not uniformly. They would have to be much stronger than the Krakatoa explosion of 1883. Large explosions of natural explosives (like https://en.wikipedia.org/wiki/Trinitrotoluene) at a depth of 100 km have been suggested as a possible mechanism of kimberlite explosions.
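For point 1, here is a rough order-of-magnitude estimate (the density, velocity and seismic-coupling caveat are my assumptions; the energy-magnitude relation is the standard Gutenberg-Richter one):

```python
import math

# Back-of-the-envelope seismic estimate for a 1-mile stony asteroid
# (parameter values are illustrative assumptions, not measured data).
radius = 804.0       # meters, half of one mile
density = 3000.0     # kg/m^3, typical stony asteroid
velocity = 20_000.0  # m/s, typical impact speed

mass = density * (4.0 / 3.0) * math.pi * radius**3
energy = 0.5 * mass * velocity**2               # ~1.3e21 joules

# Gutenberg-Richter energy-magnitude relation: log10(E) = 1.5*M + 4.8 (E in J)
magnitude = (math.log10(energy) - 4.8) / 1.5
print(f"{energy:.1e} J -> magnitude ~{magnitude:.1f} if all energy were seismic")
# Only a small fraction of the impact energy couples into seismic waves, so
# the real equivalent magnitude would be lower, but still globally felt.
```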

Superquake effects:
1. A superquake will surely come with megatsunamis, which will cause most of the damage. A supertsunami may be miles high in some areas and scenarios. The tsunamis may have different etiologies; for example, resonance may play a role, or a change in the speed of the Earth's rotation.

2. Ground liquefaction (https://en.wikipedia.org/wiki/Soil_liquefaction) may result in “ground waves”, that is, surface waves on certain kinds of soil (this is my idea, which needs more research).
3. Supersonic impact waves and high-frequency vibration. A superquake could come with unusual wave patterns, which typically dissipate in the soil or do not appear at all. These could include killing sound (more than 160 dB) or supersonic shock waves which reflect from the surface and destroy solid structures by spalling, the same way anti-tank munitions do: https://en.wikipedia.org/wiki/High-explosive_squash_head
4. Other volcanic events and gas releases. Methane deposits in the Arctic would be destabilized, and methane, a strong greenhouse gas, would erupt to the surface. Carbon dioxide would be released from the oceans as a result of the shaking (the same way shaking a soda can results in bubbles). Other gases, including sulfur compounds and CO2, would be released by volcanoes.
5. Most dams will fail, resulting in flooding.
6. Nuclear facilities will melt down. See Seth Baum's discussion: http://futureoflife.org/2016/07/25/earthquake-existential-risk/#comment-4143
7. Biological weapons will be released from storage facilities.
8. Nuclear warning systems will be triggered.
9. All roads and buildings will be destroyed.
10. Large fires will break out.
11. As the natural ability of the Earth to dissipate seismic waves becomes saturated, the waves will reflect inside the Earth several times, resulting in a very long and recurring quake.
12. The waves (from a surface event) will focus on the opposite side of the Earth, as may have happened after the Chicxulub asteroid impact, which roughly coincided with the Deccan Traps on the opposite side of the Earth, resulting in comparable destruction there.
13. A large displacement of mass may result in a small change in the speed of the Earth's rotation, which would contribute to tsunamis.
14. Secondary quakes will follow, as energy is released from tectonic tensions and mountain collapses.

Large, non-global earthquakes could also become precursors of global catastrophes in several ways. The following podcast by Seth Baum is devoted to this possibility: http://futureoflife.org/2016/07/25/earthquake-existential-risk/#comment-4147

1) Destruction of biological facilities like the CDC, which holds smallpox samples and other viruses.
2) Nuclear meltdowns
3) Economic crisis or slowing of tech progress in the case of a large earthquake in San Francisco or another important area.
4) The start of a nuclear war.
5) X-risk prevention groups are disproportionately concentrated in San Francisco and around London - more concentrated than the possible sources of the risks themselves. So in the event of a devastating earthquake in SF, our ability to prevent x-risks may be greatly reduced.

The state of AI safety problem in 2016

I am trying to outline the main trends in AI safety this year. May I ask for advice on what I should add to or remove from the following list?

1. Elon Musk became a main player in the AI field with his OpenAI program. But the idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article questioning the safety of openness in the field of AI: http://www.nickbostrom.com/papers/openness.pdf Personally, I think that here we see an example of a billionaire's arrogance: he intuitively came to an idea which looks nice and appealing and may work in some contexts, but to show that it will actually work, we need rigorous proof.

Google seems to be one of the main AI companies, and its AlphaGo beat the human champion at Go. After the score reached 3:0, Yudkowsky predicted that AlphaGo had reached superhuman ability in Go and left humans forever behind, but AlphaGo lost the next game. This made Yudkowsky say that it poses one more AI risk: the risk of uneven AI development, where a system is sometimes superhuman and sometimes fails.

The number of technical articles in the field of AI control is growing exponentially, and it is not easy to read them all.

There are many impressive achievements in the field of neural nets and deep learning. Deep learning was the Cinderella of AI for many years, but now (starting from 2012) it is the princess. This was unexpected from the point of view of the AI safety community; MIRI only recently updated its research agenda to add the study of the safety of neural-net-based AI.

The doubling time in some benchmarks in deep learning seems to be 1 year.

The media overhype AI achievements.

Many new AI safety projects have started, but some concentrate on the safety of self-driving cars (even the Russian lorry builder KAMAZ is investigating AI ethics).

A lot of new investment is going into AI research, and salaries in the field are rising.

The military is increasingly interested in implementing AI in warfare.

Google has an AI ethics board, but what it is doing is unclear.

It seems that AI safety research and implementation are lagging behind actual AI development.

Update: the DAO debacle seems to be a clear reminder of the vulnerability of supposedly safe systems.

Estimation of timing of AI risk

TL;DR: Computer power is one of three arguments that we should take the prior probability of AI to be "anywhere in the 21st century". I then use four updates to shift it even earlier: the precautionary principle, recent neural net successes, etc.

I want to once again try to assess the expected time until Strong AI. I will estimate a prior probability of AI and then try to update it based on recent evidence.

First, I will try to argue for the following prior probability of AI: "If AI is possible, it will most likely be built in the 21st century, or it will be shown that the task has some very tough hidden obstacles". Arguments for this prior:

Part I. 1. Science power argument. We know that humanity was able to solve many very complex tasks in the past, and it typically took around 100 years: heavier-than-air flight, nuclear technologies, space exploration. 100 years is enough for several generations of scientists to concentrate on a complex task and extract from it everything we can without some extraordinary insight from outside our knowledge. We have already been working on AI for 65 years, by the way.

2. Moore's law argument. Moore's law will run out of power in the 21st century, but this will not stop the growth of stronger and stronger computers for a couple of decades.
This growth will come from cheaper components, from large numbers of interconnected computers, from the cumulative production of components, and from large investments of money. This means that even if Moore's law stops (that is, there is no further progress in microelectronic chip technology), for 10-20 years from that day the power of the most powerful computers in the world will continue to grow, at a lower and lower speed, and may grow 100-1000 times beyond what existed at the moment Moore's law ended.

But such computers will be very large, power-hungry and expensive. They will cost hundreds of billions of dollars and consume gigawatts of energy. The biggest computer planned now is the 200-petaflops Intel "Summit", and even if Moore's law ends there, this kind of growth means that ~20-exaflops computers will eventually be built.

There are also several almost unused options: quantum computers, superconductors, FPGAs, new ways of parallelization, graphene, memristors, optics, and the use of genetically modified biological neurons for calculations.

All this means that: A) 10^20 flops computers will eventually be built (this is comparable with some estimates of human brain capacity); B) they will be built in the 21st century; C) the 21st century will see the biggest advance in computer power compared with any other century, and almost everything that can be built will be built in the 21st century and not after (see the arithmetic sketch below).
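The arithmetic behind points A-C, as a sketch (the 100-1000x post-Moore growth factor is the assumption stated above, not a measured value):

```python
# Sketch of the computer-power arithmetic above (assumed growth factors).
summit_flops = 200e15             # planned Intel "Summit": 200 petaflops
post_moore_growth = (100, 1000)   # assumed residual growth after Moore's law ends

for g in post_moore_growth:
    print(f"x{g}: {summit_flops * g:.1e} flops")
# x100:  2.0e+19 flops (~20 exaflops)
# x1000: 2.0e+20 flops, reaching the 10^20 figure comparable with some
#        of the higher estimates of human brain capacity
```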

So, the computers on which AI may run will be built in the 21st century.

"3". Uploading argument The uploading even of a worm is lagging, but uploading provides upper limit on AI timing. There is no reason to believe that scanning human brain will take more than 100 years.

Conclusion from the prior: a flat probability distribution.

If we knew for sure that AI will be built in the 21st century, we could give it a flat probability distribution, i.e. an equal probability of appearing in any year, around 1 per cent. (Note that with a flat distribution, the conditional probability per remaining year keeps rising over time, but we will not concentrate on that now.) We can use this as a prior for our future updates. Now we will consider arguments for updating this prior probability.
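Here is a small sketch of what the flat prior implies (my formalization of the paragraph above): the cumulative probability grows linearly, while the per-year probability conditional on AI not having appeared yet keeps rising.

```python
import numpy as np

# Flat prior over 2001-2100: P(AI in any given year) = 1%.
years = np.arange(2001, 2101)
density = np.full(100, 0.01)                # flat: 1% per year
cumulative = np.cumsum(density)             # P(AI built by year t)
hazard = density / (1 - np.append(0, cumulative[:-1]))  # P(this year | not yet)

for y in (2020, 2050, 2090):
    i = y - 2001
    print(y, f"cumulative={cumulative[i]:.2f}", f"hazard={hazard[i]:.3f}")
# The hazard climbs from ~1% toward certainty as the century runs out.
```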

Part II. Updates to the prior probability.

Now we can use this prior probability of AI to estimate the timing of AI risks. Before, we discussed AI in general, but now we add the word “risk”.

Arguments for raising the probability of AI risk in the near future:

1. Simpler AI suffices for risk. We don't need a) self-improving, b) superhuman, c) universal, d) world-dominating AI for an extinction catastrophe; none of these conditions is necessary. Extinction is a simpler task than friendliness. Even a program which helps to build biological viruses - local, non-self-improving, non-agentive and specialized - could create enormous harm by helping existential terrorists build hundreds of designed pathogen-viruses. Extinction-grade AI may be simple, and it could also come earlier in time than full friendly AI. While UFAI may be the ultimate risk, we may not survive until it arrives because of simpler forms of AI, almost on the level of computer viruses. In general, earlier risks overshadow later risks.

2. Precautionary principle. We should take lower estimates of the timing of AI arrival based on the precautionary principle. Basically, this means that we should treat a 10 per cent probability of its arrival as if it were 100 per cent.

3. Recent success of neural nets. We may use the events of the last several years to update our estimate of AI timing. In recent years we have seen enormous progress in AI based on neural nets. The doubling time of AI efficiency on various tests is now around 1 year, and AI is winning at many games (Go, poker and so on). Belief in the possibility of AI has risen in recent years, resulting in overhype and a large growth in investment, as well as many new startups. Specialized hardware for neural nets has been built. If such growth continues for 10-20 years, it would mean a 1000x to 1,000,000x growth in AI capabilities, which must include reaching human-level AI (see the doubling arithmetic below).
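The doubling arithmetic referenced above, as a sketch (the one-year doubling time is this post's assumption, not an established constant):

```python
# Doubling arithmetic behind the 1000x-1,000,000x claim.
doubling_time_years = 1.0
for horizon in (10, 20):
    growth = 2 ** (horizon / doubling_time_years)
    print(f"{horizon} years -> x{growth:,.0f}")
# 10 years -> x1,024      (the "1000" figure)
# 20 years -> x1,048,576  (the "1,000,000" figure)
```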

4. AI is increasingly used to build new AIs. AI writes programs and helps to calculate the connectome of the human brain.

All this means that we should expect human-level AI in 10-20 years and superintelligence soon afterwards.

It also means that the probability of AI is distributed exponentially, from now until its creation.

Counterarguments. The biggest argument against this is also historical: we have seen many AI hype cycles before, and they failed to produce meaningful results. AI is always 10 years away, and AI researchers tend to overestimate its nearness. Humans tend to be overconfident about AI research.
We are also still far from understanding how the human brain works, and even the simplest questions about it may be puzzling. Another way to assess AI timing is the idea that AI is an unpredictable black swan event, depending on just one idea appearing (it seems that Yudkowsky thinks so). If someone gets this idea, AI is here.

Estimating the frequency of new ideas in AI design. In this view, we should multiply the number of independent AI researchers by the number of trials, that is, the number of new ideas they get. I suggest assuming that the rate of new ideas per researcher is constant. Then we should estimate the number of active and independent AI researchers, which seems to be growing, fuelled by new funding and hype.
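This "researchers x trials" picture can be written as a toy Poisson model (all numbers below are illustrative assumptions, not estimates):

```python
import math

# Toy Poisson model of the "one crucial idea" view: N researchers each
# generate `ideas_per_year` candidate ideas, and each idea is the
# breakthrough with probability p.
def p_breakthrough_by(years, n_researchers, ideas_per_year, p):
    rate = n_researchers * ideas_per_year * p   # expected breakthroughs/year
    return 1 - math.exp(-rate * years)          # P(at least one by `years`)

# e.g. 10,000 researchers, 2 ideas each per year, one-in-a-million ideas work:
for t in (10, 20, 50):
    print(t, round(p_breakthrough_by(t, 10_000, 2, 1e-6), 2))
# -> 0.18, 0.33, 0.63; growth in researcher numbers shifts these earlier
```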
Conclusion. We should estimate AI's arrival at 2025-2035, and have our preventive ideas ready and deployed by that time. If we hope to use AI to prevent other x-risks or for life extension, we should not expect it until the second half of the 21st century. We should use earlier estimates for bad AI than for good AI.