avturchin (avturchin) wrote,

The state of AI safety problem in 2016

I am trying to outline the main trends in AI safety this year. May I ask for advice on what I should add to or remove from the following list?

Elon Musk became a major player in the AI field with his OpenAI program. But the idea of AI openness is now opposed by his mentor Nick Bostrom, who is writing an article questioning the safety of openness in the field of AI: http://www.nickbostrom.com/papers/openness.pdf Personally, I think we see here an example of a billionaire's arrogance. He intuitively arrived at an idea which looks nice and appealing, and which may work in some contexts. But to show that it will actually work, we need rigorous proof.

Google seems to be one of the main AI companies, and its AlphaGo beat the human champion at Go. After the score reached 3:0, Yudkowsky predicted that AlphaGo had reached superhuman ability in Go and had left humans behind forever, but AlphaGo lost the next game. This led Yudkowsky to say that it illustrates one more risk of AI: the risk of uneven AI development, in which a system is sometimes superhuman and sometimes fails.

The number of technical articles in the field of AI control has grown exponentially, and it is not easy to read them all.

There have been many impressive achievements in the field of neural nets and deep learning. Deep learning was the Cinderella of AI for many years, but now (starting from 2012) it is the princess. This was unexpected from the point of view of the AI safety community: MIRI only recently updated its research agenda to add the study of the safety of neural-net-based AI.

The doubling time on some deep learning benchmarks seems to be about one year.

The media overhype AI achievements.

Many new projects in AI safety have started, but some concentrate on the safety of self-driving cars (even KAMAZ, the Russian truck maker, is investigating AI ethics).

A lot of new investment is going into AI research, and salaries in the field are rising.

Militaries are increasingly interested in implementing AI in warfare.

Google has an AI ethics board, but what it is doing is unclear.

It seems that work on AI safety and its implementation is lagging behind actual AI development.

Update: the DAO debacle seems to be a clear reminder of the vulnerability of supposedly safe systems.