The map of p-zombies

Real p-zombies most likely do not exist, but many ideas about them have been suggested. This map is a map of those ideas. It may be fun, or it may be useful.

The most useful application of p-zombie research is to determine whether we could lose something important during uploading.

We have to solve the problem of consciousness before we upload ourselves. Otherwise it would be the most stupid end of the world: everybody is alive and happy, but everybody is a p-zombie.

Most ideas here are from the Stanford Encyclopedia of Philosophy, the LessWrong wiki, RationalWiki, a recent post by EY, and from the works of Chalmers and Dennett. Some ideas are mine.

The PDF is here:
http://immortality-roadmap.com/zombiemap3.pdf

Hypothetical super-earthquakes as x-risks

Super-Earthquake

We could define a super-earthquake as a hypothetical large-scale quake leading to the complete destruction of human-built structures over the entire surface of the Earth. No such quake has happened in human history, and the only scientifically solid scenario for one seems to be a large asteroid impact.
Such an event would not result in human extinction by itself, as there would still be ships, planes, and people in the countryside. But it would unequivocally destroy technological civilization. To do so it would need an intensity of around 10 on the Mercalli scale: http://earthquake.usgs.gov/learn/topics/mercalli.php


It would be interesting to assess the probability of worldwide earthquakes that could destroy everything on the Earth's surface. Plate tectonics as we know it can't produce them, but the distribution of the largest earthquakes could have a long, heavy tail that includes worldwide quakes.

So, how could it happen?

1) An asteroid impact surely could result in a worldwide earthquake. I think a 1-mile asteroid is enough to create one.

2) A change in the buoyancy of a large landmass may result in a whole continent uplifting, perhaps by miles. (This is just my conjecture, not a proven scientific fact, so the possibility needs further assessment.) A smaller-scale event of this type happened in 1957 during the Gobi-Altay earthquake, when a whole mountain ridge moved. https://en.wikipedia.org/wiki/1957_Mongolia_earthquake


3) Unknown processes in the mantle sometimes result in large deep earthquakes. https://en.wikipedia.org/wiki/2013_Okhotsk_Sea_earthquake

4) Very hypothetical changes in the Earth's core may also result in worldwide earthquakes, for example if the core somehow collapsed because of a change in the crystal structure of its iron, or because of an explosion of a (hypothetical) natural uranium nuclear reactor inside it. Passing through a cloud of dark matter might also activate the Earth's core by heating it through annihilation of dark matter particles, as was suggested in one recent study. http://www.sciencemag.org/news/2015/02/did-dark-matter-kill-dinosaurs

Such warming of the Earth's core would cause it to expand and might trigger large deep quakes.

5) Superbomb explosion. "Blockbuster" bombs in WW2 were used to create mini-quakes as their main killing effect, exploding after penetrating the ground. Large nukes could be used in the same way, but a super-earthquake requires energy several orders of magnitude beyond the current power of nukes (see the rough energy comparison after this list). Many superbombs might be needed to create a superquake.
6) The Earth cracking in the area of oceanic rifts. I have read suggestions that oceanic rifts expand not gradually but in large jumps. These mid-oceanic rifts create new ocean floor. https://en.wikipedia.org/wiki/Mid-Atlantic_Ridge The evidence cited is large "steps" in the ocean floor in the zone of oceanic rifts. Boiling of water trapped in a rift and in contact with magma might also contribute to an explosive, zipper-style rupture of the rifts. But this idea may come from fringe-science catastrophism, so it should be taken with caution.
7) Supervolcano explosions. Large-scale eruptions like kimberlite pipe explosions would also produce an earthquake felt over the whole Earth, though not uniformly; they would have to be much stronger than the Krakatoa explosion of 1883. Large explosions of natural explosives (such as https://en.wikipedia.org/wiki/Trinitrotoluene) at a depth of 100 km have been suggested as a possible mechanism of kimberlite explosions.
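
To get a feel for the energies involved here, and why nukes fall short, here is a rough back-of-the-envelope sketch. It is purely illustrative: the Gutenberg-Richter energy relation is standard, but the asteroid parameters and the assumption that all energy goes into shaking are my own simplifications.

```python
# Rough comparison of energies relevant to a "superquake" (illustrative only).
# Assumptions: Gutenberg-Richter energy relation log10(E) = 1.5*M + 4.8 (E in joules),
# a stony 1-mile (~1.6 km) asteroid of density 3000 kg/m^3 hitting at 20 km/s,
# and the ~50 Mt Tsar Bomba as the largest nuclear device ever tested.
import math

def quake_energy_joules(magnitude):
    """Radiated seismic energy for a given magnitude (Gutenberg-Richter relation)."""
    return 10 ** (1.5 * magnitude + 4.8)

def magnitude_from_energy(energy_joules):
    """Inverse of the relation above."""
    return (math.log10(energy_joules) - 4.8) / 1.5

# 1-mile asteroid: kinetic energy = 1/2 * m * v^2
radius_m = 800.0
density = 3000.0                          # kg/m^3, stony asteroid
mass = density * (4.0 / 3.0) * math.pi * radius_m ** 3
velocity = 20_000.0                       # m/s, typical impact speed
asteroid_energy = 0.5 * mass * velocity ** 2

tsar_bomba = 50e6 * 4.184e9               # 50 Mt TNT in joules
chile_1960 = quake_energy_joules(9.5)     # largest recorded earthquake (Chile, 1960)

print(f"Asteroid impact:   {asteroid_energy:.1e} J (~M{magnitude_from_energy(asteroid_energy):.1f} if fully coupled)")
print(f"M9.5 quake (1960): {chile_1960:.1e} J")
print(f"Tsar Bomba:        {tsar_bomba:.1e} J (~M{magnitude_from_energy(tsar_bomba):.1f} at best)")
```

Even under the generous assumption of perfect seismic coupling, the largest nuclear device ever tested sits roughly two orders of magnitude below the largest recorded earthquake and about four below a mile-wide asteroid impact, which is why a single bomb is nowhere near superquake territory.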

Superquake effects:
1. A superquake would surely come with a megatsunami, which would cause most of the damage. The supertsunami might be miles high in some areas and scenarios. The tsunamis could have different etiologies: for example, resonance may play a role, or a change in the speed of the Earth's rotation.

2. Soil liquefaction https://en.wikipedia.org/wiki/Soil_liquefaction may result in "ground waves", that is, surface waves on certain kinds of soil (this is my idea, and it needs more research).
3. Supersonic impact waves and high-frequency vibration. A superquake could come with unusual wave patterns which normally dissipate in soil or do not appear at all. These could include killing sound (more than 160 dB) or supersonic shock waves that reflect from the surface and destroy solid surfaces by spalling, the same way anti-tank munitions do. https://en.wikipedia.org/wiki/High-explosive_squash_head
4. Other volcanic events and gas release. Methane deposits in the Arctic would be destabilized, and methane, a strong greenhouse gas, would erupt to the surface. Carbon dioxide would be released from the oceans as a result of the shaking (the same way shaking a soda can produces bubbles). Other gases, including sulfur compounds and CO2, would be released by volcanoes.
5. Most dams would fail, resulting in flooding.
6. Nuclear facilities would melt down. See the discussion with Seth Baum: http://futureoflife.org/2016/07/25/earthquake-existential-risk/#comment-4143
7. Biological weapons would be released from storage facilities.
8. Nuclear warning systems would be triggered.
9. All roads and buildings would be destroyed.
10. Large fires would break out.
11. As the Earth's natural ability to dissipate seismic waves would be saturated, the waves would reflect inside the Earth several times, resulting in a very long and repeating quake.
12. The waves (from a surface event) would focus on the opposite side of the Earth, as may have happened after the Chicxulub asteroid impact, which roughly coincided with the Deccan Traps on the opposite side of the Earth and resulted in comparable destruction there.
13. A large displacement of mass might result in a small change in the Earth's rotation speed, which would contribute to tsunamis.
14. Secondary quakes would follow, as energy would be released from tectonic tensions and mountain collapses.

Large but non-global earthquakes could also become precursors of global catastrophes in several ways. The following podcast by Seth Baum is devoted to this possibility: http://futureoflife.org/2016/07/25/earthquake-existential-risk/#comment-4147

1) Destruction of biological facilities, like the CDC, which hold smallpox samples or other viruses.
2) Nuclear meltdowns
3) Economic crisis or slowing of technological progress in case of a large earthquake in San Francisco or another important area.
4) The start of a nuclear war.
5) X-risk prevention groups are disproportionately concentrated in San Francisco and around London; they are more geographically concentrated than the possible sources of risk. So in the event of a devastating earthquake in San Francisco, our ability to prevent x-risks could be greatly reduced.

The state of the AI safety problem in 2016

I am trying to outline the main trends in AI safety this year. May I ask for advice on what I should add to or remove from the following list?

1. Elon Musk became a main player in the AI field with his OpenAI program. But the idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article questioning the safety of openness in the field of AI. http://www.nickbostrom.com/papers/openness.pdf Personally, I think we see here an example of billionaire arrogance: he intuitively came to an idea that looks nice and appealing and may work in some contexts, but to show that it will actually work we need rigorous proof.

Google seems to be one of the main AI companies, and its AlphaGo beat a human champion at Go. After the score reached 3:0, Yudkowsky predicted that AlphaGo had achieved superhuman ability in Go and left humans forever behind, but AlphaGo lost the next game. This led Yudkowsky to say that it points to one more risk of AI: the risk of uneven AI development, where a system is sometimes superhuman and sometimes fails.

The number of technical articles in the field of AI control has grown exponentially, and it is not easy to read them all.

There have been many impressive achievements in the field of neural nets and deep learning. Deep learning was the Cinderella of AI for many years, but now (starting from 2012) it is the princess. This was unexpected from the point of view of the AI safety community; MIRI only recently updated its research agenda and added the study of safety for neural-net-based AI.

The doubling time of performance on some deep learning benchmarks seems to be about one year.

The media overhype AI achievements.

Many new AI safety projects have started, but some concentrate on the safety of self-driving cars (even the Russian lorry maker KAMAZ is investigating AI ethics).

A lot of new investment is going into AI research, and salaries in the field are rising.

Militaries are increasingly interested in implementing AI in warfare.

Google has an AI ethics board, but what it is doing is unclear.

It seems that AI safety research and implementation are lagging behind actual AI development.

Update: the DAO debacle seems to be a clear reminder of the vulnerability of supposedly safe systems.

Estimating the timing of AI risk

TL;DR: Computing power is one of three arguments that we should take the prior probability of AI arrival as "anywhere in the 21st century". I then use four updates to shift the estimate even earlier: the precautionary principle, recent neural net successes, and so on.

I want to once again try to assess the expected time until Strong AI. I will estimate a prior probability of AI arrival and then try to update it based on recent evidence.

First, I will try to argue for the following prior: "If AI is possible, it will most likely be built in the 21st century, or it will be shown that the task has some very tough hidden obstacles." Arguments for this prior:

Part I

1. The science-power argument. We know that humanity has been able to solve many very complex tasks in the past, and it typically took around 100 years: heavier-than-air flight, nuclear technologies, space exploration. A hundred years is enough for several generations of scientists to concentrate on a complex task and extract everything from it that can be done without some extraordinary insight from outside our current knowledge. We have already been working on AI for 65 years, by the way.

2. The Moore's law argument. Moore's law will run out of power in the 21st century, but this will not stop the growth of stronger and stronger computers for a couple of decades.
This growth will come from cheaper components, from larger numbers of interconnected computers, from cumulative production of components, and from large investment. It means that even if Moore's law stops (that is, there is no more progress in microelectronic chip technology), the power of the most powerful computers in the world will keep growing for 10-20 years from that day, at a slower and slower rate, and may grow 100-1000 times from the moment Moore's law ends.

But such computers will be very large, power-hungry, and expensive. They will cost hundreds of billions of dollars and consume gigawatts of energy. The biggest computer planned now is the 200-petaflop "Summit", and even if Moore's law ends with it, this means that 20-exaflop computers will eventually be built.

There are also several almost untapped options: quantum computers, superconducting computing, FPGAs, new ways of parallelization, graphene, memristors, optics, and the use of genetically modified biological neurons for computation.

All this means that: A) 10^20 flops computers will eventually be built (which is comparable with some estimates of human brain capacity); B) they will be built in the 21st century; C) the 21st century will see the biggest advance in computing power compared with any other century, and almost everything that can be built will be built in the 21st century and not later.

So, the computers on which AI could run will be built in the 21st century.
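
As a rough illustration of claim A (just arithmetic; the post-Moore growth factor is taken from the argument above, and the human-brain range reflects widely varying published estimates, so both are assumptions):

```python
# Back-of-the-envelope check of claim A (all figures are rough assumptions).
summit_flops = 200e15              # planned ~200 petaflop machine
post_moore_growth = (100, 1000)    # assumed further growth after Moore's law ends

low = summit_flops * post_moore_growth[0]    # 2e19 flops
high = summit_flops * post_moore_growth[1]   # 2e20 flops

# Published estimates of the brain's computational capacity vary by orders of
# magnitude; a commonly quoted range is roughly 1e16 to 1e20 flops.
brain_low, brain_high = 1e16, 1e20

print(f"Eventual supercomputers: {low:.0e} to {high:.0e} flops")
print(f"Human brain estimates:   {brain_low:.0e} to {brain_high:.0e} flops")
```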

"3". Uploading argument The uploading even of a worm is lagging, but uploading provides upper limit on AI timing. There is no reason to believe that scanning human brain will take more than 100 years.

Conclusion from the prior: a flat probability distribution.

If we knew for sure that AI will be built in the 21st century, we could give it a flat probability: an equal chance of appearing in any year, around 1 per cent. (Viewed as a constant per-year hazard rate, this gives an exponentially shaped cumulative probability, but we will not concentrate on that now.) We can use this as a prior for our future updates. Now we will consider arguments for updating this prior probability.
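
A minimal numerical sketch of this prior (my own illustration; whether to read the 1 per cent as a uniform distribution over the century or as a constant per-year hazard rate is an interpretive choice):

```python
# Flat prior sketch: ~1% chance of AI arrival in each year of the 21st century.
# Uniform reading: cumulative probability rises linearly (reaches 1.0 by 2100).
# Constant-hazard reading: cumulative follows 1 - 0.99^n, the "exponential"
# shape mentioned in the text.
p_per_year = 0.01

for year in (2030, 2050, 2075, 2100):
    n = year - 2000
    flat = p_per_year * n                   # uniform over 2001-2100
    hazard = 1 - (1 - p_per_year) ** n      # constant per-year hazard
    print(f"{year}: uniform {flat:.2f}, constant-hazard {hazard:.2f}")
```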

Part II. Updates to the prior probability.

Now we can use this prior probability of AI to estimate the timing of AI risks. Before, we discussed AI in general; now we add the word "risk".

Arguments for raising the probability of AI risk in the near future:

Simpler AI for risk. We don't need an AI that is a) self-improving, b) superhuman, c) universal, or d) capable of world domination for an extinction catastrophe; none of these conditions is necessary. Extinction is a simpler task than friendliness. Even a program that helps to build biological viruses and is local, non-self-improving, non-agentive, and specialized could do enormous harm by helping to build hundreds of designer pathogens in the hands of existential terrorists. Extinction-grade AI may be simple, and it could come earlier in time than full friendly AI. While UFAI may be the ultimate risk, we may not be able to survive until then because of simpler forms of AI, almost on the level of computer viruses. In general, earlier risks overshadow later risks.

The precautionary principle. We should take lower estimates of AI arrival timing based on the precautionary principle. Basically, this means that we should treat a 10 per cent probability of its arrival as if it were 100 per cent.

Recent success of neural nets. We may use the events of the last several years to update our estimate of AI timing. In recent years we have seen enormous progress in AI based on neural nets. The doubling time of AI performance on various tests is around one year now, and AI has won at many games (Go, poker, and so on). Belief in the possibility of AI has risen in recent years, which has resulted in overhype and large growth in investment, as well as many new startups. Specialized hardware for neural nets has been built. If such growth continues for 10-20 years, it would mean a 1,000 to 1,000,000-fold growth in AI capabilities, which should include reaching human-level AI.
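
The arithmetic behind that last range is just repeated doubling (a sketch that assumes the one-year doubling time actually holds for 10-20 years):

```python
# Capability growth implied by a constant one-year doubling time (an assumption).
for years in (10, 15, 20):
    print(f"{years} years of doubling: ~{2 ** years:,}x growth")
# 10 years -> ~1,024x; 20 years -> ~1,048,576x, matching the 1,000 to 1,000,000 range.
```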

AI is increasingly used to build new AIs. AI writes programs and helps to compute the connectome of the human brain.

All this means that we should expect human-level AI in 10-20 years and superintelligence soon afterwards.

It also means that the probability of AI arrival is distributed exponentially from now until its creation.

Counterarguments. The biggest argument against this is also historical: we have seen many AI hype cycles before, and they failed to produce meaningful results. AI is always "10 years from now", and AI researchers tend to overestimate progress; humans tend to be overconfident about AI research.
We are also still far from understanding how the human brain works, and even the simplest questions about it may be puzzling. Another way to assess AI timing is the idea that AI is an unpredictable black swan event, depending on just one key idea appearing (it seems that Yudkowsky thinks so). If someone gets this idea, AI is here.

Estimating the frequency of new ideas in AI design. In this case we should multiply the number of independent AI researchers by the number of trials, that is, the number of new ideas they get. I suggest treating the latter rate as constant. Then we need to estimate the number of active and independent AI researchers, which seems to be growing, fuelled by new funding and hype.
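
A toy version of this "researchers times trials" model, with every number a made-up assumption purely to show how the pieces combine:

```python
# Toy "key idea" model (all parameters are hypothetical, for illustration only).
# Each researcher produces `ideas_per_year` independent ideas; each idea has a
# small probability `p_key` of being the single breakthrough that enables AI.
def p_breakthrough_by(years, researchers, growth, ideas_per_year, p_key):
    """Probability that at least one key idea appears within `years` years,
    with the researcher pool growing by `growth` per year."""
    p_none = 1.0
    n = researchers
    for _ in range(years):
        trials = n * ideas_per_year
        p_none *= (1 - p_key) ** trials
        n *= 1 + growth
    return 1 - p_none

# Hypothetical inputs: 10,000 researchers growing 20% per year, 10 ideas each
# per year, and a 1-in-10-million chance that any given idea is the key one.
for horizon in (5, 10, 20):
    p = p_breakthrough_by(horizon, 10_000, 0.20, 10, 1e-7)
    print(f"P(key idea within {horizon} years) ≈ {p:.2f}")
```

The point is only structural: if the researcher pool keeps growing, the chance of the "one idea" appearing rises faster than linearly with time, which is why the growth of the field matters for timing.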
Conclusion. We should estimate its arrival around 2025-2035 and have our preventive ideas ready and deployed by that time. If we hope to use AI to prevent other x-risks or for life extension, we should not expect it until the second half of the 21st century. We should use an earlier estimate for bad AI than for good AI.