Scary Forecast: Computers to Threaten Humanity Within This Century

Advances in artificial intelligence are giving experts goose bumps.

As money continues to pour into R&D of artificial intelligence, alarm bells are sounding about the growing likelihood of making computers and robots too smart for our own good.

"Some awfully bright people are genuinely worried about AI," Gary Marcus, a research psychologist at New York University, recently wrote. Scenarios include AI systems that become self-aware and replicate themselves across online networks for self-preservation, becoming impossible to stop. Though such possibilities may still sound like science fiction to lay folks, experts who work in the field are having serious discussions about them. With autonomous (self-driving) cars and other machines capable of making their own decisions drawing closer, this question looms larger: What if the goals AI systems pursue are not what humans intended?

New R&D work is beginning to focus on eliminating the risks of making computers too smart. Thirty-seven projects recently received a combined $7 million. Some of the money is coming from Tesla CEO Elon Musk, who has warned that creating artificial intelligence is like "summoning the demon."

Among such work: Finding ways to reduce threats to financial systems, developing a code of ethics for AI systems and making robots safer. One area of continued focus will be the military’s use of autonomous killer robots, including drones. Look for both the Department of Defense and the National Science Foundation to step up and back AI safety research in coming years. Government regulators will also start to consider the potential dangers of artificial intelligence in other, broader ways. But American companies will fight tooth and nail against any regulations or restrictions that global competitors won’t have to abide by.

Why all the hand-wringing? For one thing, private companies control much of AI’s funding, and much of the work and testing is done by some of the tech world’s best minds in secretive labs, without outside oversight. Facebook, for example, has hired many of the world’s top researchers in the field of deep learning. Google has researchers working on computer networks that mimic the brain’s complex network of neurons. Uber is ramping up R&D with the help of 40 researchers it recently poached from Carnegie Mellon University’s robotics center. Microsoft, IBM and Amazon also have a lot of skin in the game, with scores of AI researchers on board. And those are just the big players; the field is also flush with start-ups pursuing cutting-edge research.

The research has led to many advances in just the past few years. In many cases, commercialization of AI-fueled products won’t fly without protections against systems run amok: Driverless cars and robots for home use come quickly to mind. But as past and present breaches of computer security have shown, making systems fully resistant to hackers is an inherently uncertain business.

To be sure, developing truly advanced, human-brain-like computer systems still requires a series of mind-blowing breakthroughs, each akin to harnessing nuclear energy. Experts say systems as smart as humans are at least many decades away. The human brain is unique in that it delivers massive computational power while using only tiny amounts of energy; so far, no computer comes close. Moreover, scientists still tend to underestimate the human brain’s breadth of intelligence.

To get machines closer to mimicking the human brain, advances are needed in neuroscience, computer chip design, machine learning and robotics, to name a few. "Each one of these breakthroughs could happen with almost no warning signs," says Stuart Russell, professor of computer science at the University of California, Berkeley. Among important research projects under way: studies to better understand how the brain actually works; development of computer chips that can process memory and storage simultaneously, at low power; and programming robots that learn as they go. Other breakthroughs have yet to be identified. But even advanced AI systems that perform at somewhat less than human brainpower have the potential to be incredibly powerful, making it likely that within this century humankind will have to contend with AI systems that pose serious threats to humanity.

John Miley
Senior Associate Editor, The Kiplinger Letter

John Miley is a Senior Associate Editor at The Kiplinger Letter. He mainly covers technology, telecom and education, but will jump on other important business topics as needed. In his role, he provides timely forecasts about emerging technologies, business trends and government regulations. He also edits stories for the weekly publication and has written and edited e-mail newsletters.

He joined Kiplinger in August 2010 as a reporter for Kiplinger's Personal Finance magazine, where he wrote stories, fact-checked articles and researched investing data. After two years at the magazine, he moved to the Letter, where he has been for the last decade. He holds a BA from Bates College and a master’s degree in magazine journalism from Northwestern University, where he specialized in business reporting. An avid runner and a former decathlete, he has written about fitness and competed in triathlons.