
Estimation of the timing of AI risk

TL;DR: Computing power is one of three arguments for taking the prior probability of AI as "anywhere in the 21st century". I then apply four updates that shift this estimate even earlier, including the precautionary principle and the recent success of neural nets.

I want to once again try to assess the expected time until Strong AI. I will estimate a prior probability for AI and then update it based on recent evidence.

First, I will argue for the following prior probability of AI: "If AI is possible, it will most likely be built in the 21st century, or it will be proven that the task has some very tough hidden obstacles." Arguments for this prior:

Part I.

1. Science power argument. We know that humanity has been able to solve many very complex tasks in the past, and each typically took around 100 years: heavier-than-air flight, nuclear technology, space exploration. One hundred years is enough for several generations of scientists to concentrate on a complex task and extract everything from it that can be extracted without some extraordinary insight from outside our knowledge. We have already been working on AI for about 65 years, by the way.

2. Moore's law argument. Moore's law will run out of power in the 21st century, but this will not stop the growth of ever stronger computers for a couple of decades.
This growth will come from cheaper components, from larger numbers of interconnected computers, from cumulative production of components, and from larger investments. It means that even if Moore's law stops (that is, there is no further progress in microelectronic chip technology), the power of the most powerful computers in the world will continue to grow for 10-20 years after that day, at a slower and slower rate, and may grow 100-1000 times beyond the level at which Moore's law ended.

But such computers will be very large, power-hungry and expensive. They will cost hundreds of billions of dollars and consume gigawatts of energy. The biggest computer planned now is the 200-petaflops IBM "Summit", and even if Moore's law ends there, it means that 20-exaflops computers will eventually be built (see the arithmetic sketch below).

There are also several almost untapped options: quantum computers, superconducting circuits, FPGAs, new ways of parallelization, graphene, memristors, optics, and the use of genetically modified biological neurons for computation.

All this means that: A) Computers of 10^20 flops will eventually be built. (This is comparable with some estimates of human brain capacity.) B) They will be built in the 21st century. C) The 21st century will see the biggest advance in computing power compared with any other century, and almost everything that can be built will be built in the 21st century and not after.
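A quick arithmetic check of these figures (a minimal Python sketch; the 100-1000x multiplier is the post-Moore growth assumed above, and the brain-capacity band in the comment is only a rough, commonly cited range, not a precise estimate):

    # Scale the planned ~200-petaflops machine by the assumed post-Moore growth.
    summit = 200e15                   # ~200 petaflops, in flops
    for growth in (100, 1000):        # assumed 100-1000x growth after Moore's law ends
        print(f"{summit * growth:.0e} flops")
    # Prints 2e+19 and 2e+20 flops: 20 exaflops at the low end, and roughly the
    # 10^20 figure at the high end, which is within some rough estimates of
    # human brain capacity.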

So the computer on which AI may run will be built in the 21st century.

"3". Uploading argument The uploading even of a worm is lagging, but uploading provides upper limit on AI timing. There is no reason to believe that scanning human brain will take more than 100 years.

Conclusion from the prior: a flat probability distribution.

If we knew for sure that AI will be built in the 21st century, we could give it a flat probability, with an equal chance of it appearing in any year, around 1 per cent. (By the way, this results in an exponential cumulative probability, but we will not concentrate on that now.) We can use this probability as the prior for our future updates. Now we will consider arguments for updating this prior.
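As a rough illustration (a minimal Python sketch; it reads the "1 per cent per year" figure as a constant yearly hazard over an assumed 2017-2100 window, which is my interpretation, not a model stated in the post):

    # Cumulative probability of AI arriving under a constant ~1%/year chance.
    hazard = 0.01
    p_not_yet = 1.0
    for year in range(2017, 2101):
        p_not_yet *= (1 - hazard)
        if year in (2040, 2070, 2100):
            print(year, round(1 - p_not_yet, 2))
    # 2040: ~0.21, 2070: ~0.42, 2100: ~0.57 -- the cumulative curve bends
    # exponentially rather than rising in a straight line.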

Part II. Updates of the prior probability.

Now we can use this prior probability of AI to estimate the timing of AI risks. Above we discussed AI in general; now we add the word "risk".

Arguments for raising the probability of AI risk in the near future:

Simpler AI for risk. We don't need an AI that is a) self-improving, b) superhuman, c) universal, or d) aiming at world domination for an extinction catastrophe. None of these conditions is necessary. Extinction is a simpler task than friendliness. Even a program that helps to build biological viruses, and that is local, non-self-improving, non-agentive and specialized, could do enormous harm by helping existential terrorists design hundreds of engineered pathogenic viruses. Extinction-grade AI may be simple. It could also arrive earlier than a fully friendly AI. While UFAI may be the ultimate risk, we may not survive until it appears because of simpler forms of AI, almost on the level of computer viruses. In general, earlier risks overshadow later risks.

Precautionary principle. We should take the lower estimates of AI arrival timing, based on the precautionary principle. Basically, this means we should treat a 10 per cent probability of its arrival as if it were 100 per cent.

Recent success of neural nets. We can use the events of the last several years to update our estimate of AI timing. In recent years we have seen enormous progress in AI based on neural nets. The doubling time of AI performance on various benchmarks is around one year now, and AI wins at many games (Go, poker and so on). Belief in the possibility of AI has risen in recent years, resulting in overhype and a large growth of investment, as well as many new startups. Specialized hardware for neural nets has been built. If such growth continues for 10-20 years, it would mean a 1,000- to 1,000,000-fold growth in AI capabilities, which must include reaching human-level AI.
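The compounding behind the 1,000- to 1,000,000-fold figure is just repeated doubling (a minimal sketch; the one-year doubling time is the figure stated above, and the starting level is normalized to 1):

    # Yearly doubling of capability over 10 and 20 years.
    for years in (10, 20):
        print(years, "years ->", 2 ** years, "x")
    # 10 years -> 1024x (~1,000x); 20 years -> 1048576x (~1,000,000x)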

AI is increasingly used to build new AIs. AI writes programs and helps to calculate the connectome of the human brain.

All this means that we should expect human-level AI in 10-20 years and superintelligence soon afterwards.

It also means that the probability of AI is distributed exponentially between now and its creation.

Counterarguments. The biggest argument against this is also historical: we have seen many AI hype cycles before, and they failed to produce meaningful results. AI is always "10 years from now", and AI researchers tend to overestimate their progress. Humans tend to be overconfident about AI research.
We are also still far from understanding how the human brain works, and even the simplest questions about it may be puzzling. Another way to assess AI timing is the idea that AI is an unpredictable black-swan event, depending on only one idea appearing (it seems that Yudkowsky thinks so). If someone gets this idea, AI is here.

Estimating the frequency of new ideas in AI design. In this case, we should multiply the number of independent AI researchers by the number of trials, that is, the number of new ideas they get. I suggest treating the latter rate as constant. In that case, we should estimate the number of active and independent AI researchers, and it seems to be growing, fuelled by new funding and hype.
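One way to make this concrete is a toy "lottery of ideas" model (a sketch with purely illustrative numbers; N, k and p below are my assumptions, not estimates from the post):

    # N independent researchers each produce k new ideas per year, and each
    # idea turns out to be the crucial one with independent probability p.
    N, k, p = 10_000, 2, 1e-6                       # illustrative guesses only
    p_breakthrough_per_year = 1 - (1 - p) ** (N * k)
    print(round(p_breakthrough_per_year, 3))        # ~0.02 under these guesses
    # If the number of researchers N grows (more funding, more hype), the
    # yearly chance of the key idea appearing grows with it.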
Conclusion. We should estimate its arrival in 2025-2035 and have our preventive measures ready and deployed by that time. If we hope to use AI to prevent other x-risks or for life extension, we should not expect it until the second half of the 21st century. We should use earlier estimates for bad AI than for good AI.