The Young Fear the Coming Machine Overlords
More and more sources suggest that, sometime in the next few decades, technological advances might produce Artificial “General” Intelligence. Many people fear that Artificial General Intelligence (often abbreviated as AGI) could conceivably generate its own goals. In our culture, we tend to regard non-human entities that generate their own motives and goals as dangerous. That could be a prejudice or a fallacy, but it doesn’t really matter.
An increasing number of scientists working in the field agree that the development of AGI well before 2050 now seems a plausible possibility. That means that most people alive today will have to cope with a world of ubiquitous Artificial Intelligence, perhaps much smarter than us.
At the very least, that will entail a painfully renegotiated value of labor and investment. Human labor will be worth less as machines replace labor or make it less competitive. At the same time, the capital invested in machines and related infrastructure will pay off handsomely.
Human Life Could Be at Risk from Artificial Intelligence
Why do some people conclude that AGI could conceivably lead to an apocalyptic outcome? AGIs might become powerful machine overlords, and it turns out that young people especially have simple reasons to fear that outcome.
Earlier this month, prominent physicist Stephen Hawking warned that “the development of full artificial intelligence could spell the end of the human race.” YouGov reports that younger Americans are much more likely than older Americans to say that artificial intelligence will bring about the end of the human race: 38% of under-30s think that human life would be at risk from artificial intelligence, a view only 17% of over-65s share.
Older people have lived their lives. In a utilitarian sense, if AGI changes the world in some way, the results of those changes might conceivably be good, and they might also be pretty bad. Older people who have already lived most of their lives have fewer reasons to worry about bad outcomes, and often welcome changes that could conceivably improve the odds for them, no matter how remote.
For most old people who are not overly religious, there is a chance that the sudden emergence of robust AGI would do three things: it could improve their lives by granting them cheaper automated service systems, it could raise the value of their investments, and – most important – it might somehow contribute to the development of life extension and rejuvenation treatments.
Young people have a different outlook. Being young comes with more access to sex and other hedonistic pleasures, and with the expectation of a fairly long life ahead. The young would prefer a life not interrupted by sudden destabilizing influences exerted by AGI. They are more inclined to want the world as they know it to continue in a somewhat predictable manner, offering them the same or steadily improving opportunities. The young are less interested in automation and machines that make a small minority ever more powerful and affluent, since they don’t have enough money to invest.
This is a classic example of the difficulty of deciding which developments in the world are useful and which should be feared as threats. There is a generational conflict in the making: we are increasingly witnessing how the political machinery of our world can be purchased by rich investors. Politicians no longer have things like “values” – now they have an agenda decided by their sponsors. Essentially, the typical politician has become but a useful tool for some sponsor.
The young are more likely to realize that, whereas older people are not likely to care. The same holds for AGI machines that might do whatever they want to do, in which case anything can happen. For old people, “anything can happen” is somewhat equivalent to spending a few hundred dollars at the casino. You might win, or you might lose. If you lose, the machines might conceivably become ultra-efficient terminators and eradicate the human species. But they might also do something completely different.
AGI machines could efficiently achieve the goals of their creators and investors. In that case the young have valid reasons to be concerned, judging by how much money in the world is currently spent on destruction rather than creation. AGI could very well reflect the values of its biggest sponsors. An AGI developed by Blackwater, the Pentagon, and Goldman Sachs could very well cause a lot of human death and suffering.
Images from Wikimedia Commons.