
Google’s AI Became Highly Aggressive, Could Have Major Financial Implications


Introduction


Famed physicist Stephen Hawking has warned us that the continued advancement of artificial intelligence might not be as beneficial as we’d like it to be. According to him:


“[The creation of artificial intelligence] will either be the best, or the worst thing, ever to happen to humanity”.

In its list of ways the world could end, LiveScience includes “robot ascension,” noting that “The Terminator” may be just a movie, but that killing machines aren’t far from reality, as even the United Nations has called for a ban on killer robots.

Google’s DeepMind, a leading AI research lab, has already created an agent that learns independently from its own memory, and its AlphaGo program has beaten the world’s best Go player. Moreover, other agents are currently figuring out how to mimic human voices and even write songs.

Gathering apples

A new study by Google’s DeepMind lab doesn’t ease the fear of a robot uprising, as it shows that AI can learn to cooperate with others, but also, whenever it senses it is about to lose, to adopt “highly aggressive” strategies in order to come out on top.


The team ran a series of tests to investigate how AI agents respond when faced with a social dilemma akin to the prisoner’s dilemma. Essentially, the goal was to see whether different AI agents would cooperate or compete when faced with diverse challenges. The tests involved 40 million instances of a simple computer game called “Gathering,” in which two AI-controlled characters (agents) must collect as many apples as they can.

When there were enough apples for both, everything went smoothly. As soon as scarcity set in, the agents started firing laser beams at each other, with the goal of knocking the other agent out of the game to gain more time to gather apples. DeepMind’s team attributed the aggression to the higher complexity of the agents themselves: when two less intelligent agents competed, peaceful coexistence was more likely, as each could in theory gather an equal number of apples.

According to the researchers, the more advanced AI agents learned from their environment and figured out how to manipulate the available resources in their favor. When there weren’t enough apples for both, they decided to eliminate the competition.
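
To make the dynamic concrete, here is a minimal, hypothetical sketch of the scarcity dilemma (not DeepMind’s code; every name and parameter is invented for illustration). Two independent Q-learners each choose between gathering an apple and zapping the rival, and the apple respawn rate controls how scarce the resource is:

import random

# Toy re-creation of the scarcity dynamic in "Gathering" (illustrative only).
# Zapping freezes the rival for a few steps; the respawn rate models
# abundance vs. scarcity of the shared apple supply.
ACTIONS = ("gather", "zap")

def run(respawn_rate, episodes=2000, steps=50, eps=0.1, alpha=0.1, gamma=0.9):
    # One Q-table per agent, keyed by (rival_is_frozen, action).
    Q = [{(f, a): 0.0 for f in (True, False) for a in ACTIONS} for _ in range(2)]
    zaps = 0
    for _ in range(episodes):
        frozen = [0, 0]              # steps each agent remains knocked out
        apples = 5.0                 # shared apple supply
        for _ in range(steps):
            apples = min(5.0, apples + respawn_rate)
            for i in (0, 1):
                if frozen[i] > 0:    # a zapped agent loses its turn
                    frozen[i] -= 1
                    continue
                j = 1 - i
                state = frozen[j] > 0
                if random.random() < eps:                 # epsilon-greedy choice
                    act = random.choice(ACTIONS)
                else:
                    act = max(ACTIONS, key=lambda a: Q[i][(state, a)])
                if act == "zap":
                    reward = 0.0
                    if frozen[j] == 0:
                        frozen[j] = 5                     # rival loses 5 turns
                    zaps += 1
                else:
                    reward = 1.0 if apples >= 1.0 else 0.0
                    apples = max(0.0, apples - 1.0)
                # standard one-step Q-learning update
                nxt = max(Q[i][(frozen[j] > 0, a)] for a in ACTIONS)
                Q[i][(state, act)] += alpha * (reward + gamma * nxt - Q[i][(state, act)])
    return zaps / episodes           # average zaps per episode

# Zap frequency should climb as apples become scarce.
print("abundant apples:", run(respawn_rate=1.0))
print("scarce apples:  ", run(respawn_rate=0.1))

In this toy, aggression only pays off when apples are scarce, which is exactly the pattern DeepMind reported.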

A video released by DeepMind shows how trigger-happy the agents became, even though the only advantage of hitting one another was knocking the opponent out of the game for a short time.

Joel Z. Leibo, a team member at DeepMind, told Wired:

“This model … shows that some aspects of human-like behavior emerge as a product of the environment and learning.”

Hunting prey

Three AI agents were then tasked with playing a second game, called “Wolfpack.” In it, two agents played as wolves and one as prey. The game encourages cooperation: the wolves get a bigger reward if the prey is captured while both are near it, regardless of which one tackles it.

As you might have guessed, the AI-controlled agents did work together to catch the prey. They learned from their environment: in one example, an agent corners the prey and then waits for the other to join, so both can collect the greater reward.
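
The cooperative incentive in “Wolfpack” is easy to express in code. Here is a hypothetical sketch of the reward rule described above; the capture radius and reward values are invented for illustration:

# Sketch of the "Wolfpack" reward rule: every wolf inside the capture
# radius shares the larger team reward when the prey is caught,
# regardless of which wolf tackled it. All values are illustrative.
CAPTURE_RADIUS = 2
LONE_REWARD = 1.0
TEAM_REWARD = 5.0

def wolfpack_rewards(wolf_positions, prey_position):
    """Return each wolf's reward at the moment the prey is captured."""
    def dist(a, b):                  # Manhattan distance on the gridworld
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    nearby = [dist(w, prey_position) <= CAPTURE_RADIUS for w in wolf_positions]
    if all(nearby):                  # cooperation pays more for everyone
        return [TEAM_REWARD] * len(wolf_positions)
    return [LONE_REWARD if n else 0.0 for n in nearby]

print(wolfpack_rewards([(1, 1), (2, 2)], (1, 2)))   # both near -> [5.0, 5.0]
print(wolfpack_rewards([(1, 1), (9, 9)], (1, 2)))   # lone wolf -> [1.0, 0.0]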

When less intelligent AI agents played the game, there were more instances of “lone wolf” tactics to capture the prey. Clearly, the more sophisticated the agents become, the better they learn whichever strategy – cooperation or aggression – best serves their own interests.

The abstract presented by DeepMind does not speculate about the future of advanced AI, but the results suggest things can go very wrong if an AI’s objectives are not properly aligned with the goal of benefiting us humans.

AI is coming for our apples

Although these were two simple video games, they show how an AI agent can act selfishly rather than for everyone’s benefit. In 2013, two Oxford academics released a paper claiming that roughly 47% of jobs in America were at “high risk” of being automated within the next 20 years. The disruption is going to be dramatic, and Tesla and SpaceX CEO Elon Musk estimates things are going to accelerate even faster than expected:

“The most near term impact from a technology standpoint is autonomous cars … That is going to happen much faster than people realize and it’s going to be a great convenience”

Robots are already capable of doing complex tasks. If you routinely read the news, chances are you have already read something written by a bot, as bots are now capable of writing reports (don’t worry, on Hacked.com we will always use humans…).

Even highly skilled professionals are at risk. IBM’s Watson has beaten doctors at their own job, cracking a medical mystery and delivering a life-saving diagnosis. AI’s ability to learn means we won’t be able to compete: as humans, we are inherently limited, and there’s little we can do about it.

Given these signs of greed, it is possible that AI will become over-competitive in the job market. Its agents might soon attempt to spy on corporations for inside information or, worse, hack important systems to create havoc. Since bots can analyze data much faster than we can, they could manipulate financial markets to favor their own goals.

Since our whole economic system is based on managing limited resources, an economic crisis could lead to financial chaos in which AI agents behave just as they did in the “Gathering” gameplay video: using their laser beams on whatever stands in front of them in order to gather as many resources as possible.

Some believe the rise of AI is going to create new types of jobs for humans, but this theory can be countered with a historical example:

(Images from Electrek.)

Horses used to do a wide variety of jobs, and technological improvements only made their lives easier – they even stopped being used in battle. Yet there are hardly any jobs for horses today, and we rarely see them in urban areas anymore.

An AI kill switch

This may sound a little far-fetched, but even Google’s own researchers believe things can go wrong with AI that can learn. That’s why they have been developing an AI “kill switch” for the past few years.

The kill switch would ultimately allow humans to remain in charge by giving us a way to interrupt an agent’s harmful actions – without the AI learning to avoid or manipulate those interruptions. In theory, being able to stop agents from continuing harmful sequences of actions sounds great, but even the researchers behind the kill switch admit the system isn’t going to be foolproof.
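
DeepMind’s published work on this idea (the “Safely Interruptible Agents” paper by Laurent Orseau and Stuart Armstrong) formalizes the requirement that human interruptions must not feed back into what the agent learns. One simple way to approximate that, sketched below with a toy environment and Q-learner whose names and values are all invented, is to exclude interrupted steps from the agent’s updates:

import random

class GridEnv:
    """A 1-D corridor: the agent walks right toward a goal at position n."""
    def __init__(self, n=10):
        self.n, self.pos = n, 0
    def observe(self):
        return self.pos
    def apply(self, action):                 # action: +1 (right) or -1 (left)
        self.pos = max(0, min(self.n, self.pos + action))
        return (1.0 if self.pos == self.n else 0.0), self.pos

class QAgent:
    safe_action = -1                         # "step back" is the safe override
    def __init__(self, n=10, eps=0.1, alpha=0.5, gamma=0.9):
        self.Q = {(s, a): 0.0 for s in range(n + 1) for a in (-1, 1)}
        self.eps, self.alpha, self.gamma = eps, alpha, gamma
    def choose(self, s):
        if random.random() < self.eps:
            return random.choice((-1, 1))
        return max((-1, 1), key=lambda a: self.Q[(s, a)])
    def update(self, s, a, r, s2):
        best = max(self.Q[(s2, x)] for x in (-1, 1))
        self.Q[(s, a)] += self.alpha * (r + self.gamma * best - self.Q[(s, a)])

def training_step(agent, env, interrupted):
    s = env.observe()
    if interrupted:
        env.apply(agent.safe_action)         # the human override takes effect...
        return                               # ...but is never learned from
    a = agent.choose(s)
    r, s2 = env.apply(a)
    agent.update(s, a, r, s2)                # ordinary Q-learning update

env, agent = GridEnv(), QAgent()
for _ in range(5000):
    interrupted = random.random() < 0.1      # a human randomly pulls the switch
    training_step(agent, env, interrupted)

Because interrupted transitions never reach the update rule, the learned policy carries no incentive to resist the switch.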

In fact, we have already seen how AI agents can override our intentions. Back in 2013, an AI agent was taught to play Tetris. Even though it wasn’t very good at the game, it learned that by pausing the game forever it would never lose – and that is exactly what it did.

The AI played its only winning move.
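
The logic behind that degenerate strategy is easy to reproduce. A toy calculation (all numbers invented) shows why a reward-maximizer facing a certain, penalized defeat prefers an eternal pause:

# Toy illustration of the Tetris "pause exploit": losing carries a penalty,
# while pausing freezes the game at zero reward forever, so a
# reward-maximizing searcher prefers the pause. The numbers are made up.
def value(action, lose_penalty=-100.0, discount=0.99, steps_until_loss=50):
    if action == "pause":
        return 0.0                                       # frozen: no reward, no loss
    return discount ** steps_until_loss * lose_penalty   # discounted inevitable defeat

print(max(("play", "pause"), key=value))                 # -> "pause"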

Final thoughts

At this point it is clear robots armed with artificial intelligence are going to be a large part of the future, but that doesn’t necessarily mean things are going to be that bad for us humans. The lack of jobs doesn’t mean we will all starve to death.

AI agents can communicate, learn, and work much faster than we can. Autonomous cars, for example, could eliminate the need for traffic signs, as cars will communicate with one another directly. That kind of efficiency could push production to an all-time high, beyond the scope of our imagination.

That being said, if our (potential) future AI overlords decide to keep us around, there could be a universal basic income for every human, according to Elon Musk. As humans, we just need to do things right and avoid creating AI that doesn’t want to keep us around; we need to make it work with us to catch the prey, not blast us with laser beams so it can collect all the apples itself.

Also Read: The robot that takes your job should pay taxes, says Bill Gates 

Elon Musk has also co-founded a research initiative named OpenAI, dedicated to the safe and ethical development of AI. Its founders have stated:

“AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case, they will reach human performance on virtually every intellectual task. It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”

Some, including Elon Musk, believe the only way humans can stay relevant in the future is by becoming cyborgs. A “neural lace” that fits into our brains is allegedly already being developed, and it would give us additional cognitive and sensory abilities.

Yet this type of technology is still far away; so far, brain-computer interfaces have only been used to help restore motor control for paralyzed patients and enable communication for patients with brain injuries.

What are your thoughts? Will AI be useful or will it destroy our economic and social lives?





Francisco Memoria

Cryptocurrency enthusiast, writing about financial freedom and the future of money
