Research Priorities for Artificial Intelligence – Open Letter

by Giulio Prisco, January 12, 2015

The Future of Life Institute, a research and outreach organization working to mitigate existential risks facing humanity, with a current focus on potential risks from the development of human-level artificial intelligence, has issued an open letter to the artificial intelligence (AI) community.

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. [W]e recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. [W]e believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

An attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

Fear of Smarter-than-Human AI

The open letter has been signed by a who's who of experts in advanced AI. Notably, the list of signatories includes renowned cosmologist Stephen Hawking, Tesla Motors and SpaceX founder Elon Musk, and Oxford philosopher Nick Bostrom, author of the recently published book “Superintelligence: Paths, Dangers, Strategies” (here is my full review of the book). After the publication of Bostrom’s book, Hawking and Musk expressed fears about the future of AI. In August, Elon Musk tweeted that artificial intelligence could be more dangerous than nuclear weapons, and in October he likened it to “summoning a demon.” Stephen Hawking told the BBC in December that AI could “spell the end of the human race.”

Bostrom defines superintelligence as something far smarter than us, not in the provincial sense that Einstein is smarter than the village idiot, but in the real sense that Einstein (or the village idiot – the difference is utterly irrelevant on this scale) is smarter than a beetle. Bostrom thinks that the first human-equivalent AI able to learn and improve itself could be developed sometime in this century, with the possibility of a very fast transition to superintelligence soon thereafter. A superintelligent and hostile AI could, as Musk and Hawking fear, eliminate humanity.

The signatures of Musk and Hawking may be interpreted as expressing confidence that adopting the recommendations in the open letter would diminish the risks of uncontrolled development of AI. You can sign the letter here if you want. I didn’t sign it (yet), because I think that important progress in AI, including the development of smarter-than-human AI and superintelligence, can only emerge from free, spontaneous and unconstrained research. I don’t disagree with the open letter or the research priorities document, but setting common priorities is not the aspect of AI research that I find most interesting at this moment – I prefer to let a thousand flowers bloom.

The mission of the Future of Life Institute is to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges. The Institute received seed funding from Skype co-founder Jaan Tallinn and Matt Wage.

Images from Shutterstock.
