Artificial Intelligence – A Military Roadmap
A “Defense Science Board 2015 Summer Study on Autonomy” will study autonomous systems (e.g., robots and drones) and how autonomy can be integrated into defense operations.
“The purpose of the study is to identify the science, engineering, and policy problems that must be solved to permit greater operational use of autonomy across all war-fighting domains. The study will assess opportunities for DoD to enhance mission efficiency, shrink life-cycle costs, and reduce loss of life through the use of autonomy. Emphasis will be given to exploration of the bounds – both technological and social – that limit the use of autonomy across a wide range of military operations. The study will ask questions such as: What activities cannot today be performed autonomously? When is human intervention required? What limits the use of autonomy? How might we overcome those limits and expand the use of autonomy in the near term as well as over the next 2 decades?”
The memo mentions IBM’s Watson, the use of robotics and automation in ports and mines, autonomous vehicles (from UAVs to Google’s self-driving car), automated logistics and supply chain management, and many more applications of AI.
A Defense Department official told Defense One that they want a real roadmap for autonomy in military operations.
Hostile Superintelligent AI Could Exterminate Humanity
The report’s findings could refute or confirm current fears about the future of AI. In August, Elon Musk tweeted that artificial intelligence could be more dangerous than nuclear weapons and in October, likened it to “summoning a demon.” Stephen Hawking told the BBC in December that AI could “spell the end of the human race.”
The fears of Musk and Hawking, among others, are prompted by the book “Superintelligence: Paths, Dangers, Strategies” by philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford. Here is my full review of the book.
Bostrom defines superintelligence as something far smarter than us, not in the provincial sense that Einstein is smarter than the village idiot, but in the real sense that Einstein (or the village idiot – the difference is utterly irrelevant on this scale) is smarter than a beetle. Bostrom thinks that the first human-equivalent AI able to learn and improve itself could be developed sometime in this century, with the possibility of a very fast transition to superintelligence soon thereafter. A superintelligent and hostile AI could, as Musk and Hawking fear, eliminate humanity.
“Now when the AI improves itself, it improves the thing that does the improving. An intelligence explosion results – a rapid cascade of recursive self-improvement cycles causing the AI’s capability to soar… The final phase begins when the AI has gained sufficient strength to obviate the need for secrecy. The AI can now directly implement its objectives on a full scale. The overt implementation phase might start with a ‘strike’ in which the AI eliminates the human species.”
What do you think? Are you scared of AI? Do you agree with Musk and Hawking? Comment below!
Images from Shutterstock.