If AI drives humans to extinction, it'll be our fault

Should you really believe the doomsayers? We're going to go with no

+Comment The question of whether machine learning poses an existential risk to humanity will continue to loom over our heads as the technology advances and spreads around the world. Mainly because pundits and some industry leaders won't stop talking about it.

Opinions are divided. AI doomers believe there is a significant risk that humans could be wiped out by superintelligent machines. The AI boomers, however, believe that is nonsense and that the technology will instead solve humanity's most pressing problems. One of the most confusing aspects of the discourse is that people can hold both beliefs simultaneously: AI will either be very bad or very good.

But how?

This week the Center for AI Safety (CAIS) released a paper [PDF] looking at the very bad part.

"Rapid advancements in AI have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks," reads the abstract. "Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them."

The report follows a terse warning, organized by the San Francisco-based non-profit research institute and signed by hundreds of academics, analysts, engineers, CEOs, and celebrities. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," it said.

CAIS has divided catastrophic risks into four different categories: malicious use, AI race, organizational risks, and rogue AIs. 

Malicious use describes bad actors using the technology to inflict widespread harm, like searching for highly toxic molecules to develop bioweapons, or spinning up chatbots to spread propaganda or disinformation to prop up or take down political regimes. The AI race focuses on the dangerous impacts of competition: nations and companies rushing to develop AI ahead of foreign adversaries or commercial rivals, whether for national security or profit, could recklessly accelerate the technology's capabilities.

Some hypothetical scenarios include the military building autonomous weapons or turning to machines for cyberwarfare. Industries might rush to automate and replace human labor with AI to boost productivity, leading to mass unemployment and services run by machines. The third category, organizational risks, describes deadly disasters such as Chernobyl, when a Soviet nuclear reactor melted down and released radioactive material, or the Space Shuttle Challenger, which broke apart shortly after liftoff, killing the seven astronauts onboard.

Finally, there are rogue AIs: the common trope of superintelligent machines that have become too powerful for humans to control. Here, AI agents designed to fulfill some goal go awry; as they look for more efficient ways to reach that goal, they can exhibit unwanted behaviors, find ways to gain more power, and go on to develop malicious tactics such as deception.

"These dangers warrant serious concern. Currently, very few people are working on AI risk reduction. We do not yet know how to control highly advanced AI systems, and existing control methods are already proving inadequate. The inner workings of AIs are not well understood, even by those who create them, and current AIs are by no means highly reliable," the paper concluded.

"As AI capabilities continue to grow at an unprecedented rate, they could surpass human intelligence in nearly all respects relatively soon, creating a pressing need to manage the potential risk."

Humans vs humans

The paper's premise hinges on machines becoming so powerful that they naturally turn evil. But look more closely at the categories and the hypothetical harms AI could inflict on society don't come directly from machines; they come from humans. Bad actors are required to use the technology maliciously: someone needs to repurpose drug-designing software to come up with deadly pathogens, and generative AI models need to be primed to generate and push disinformation. Similarly, the so-called "AI race" is driven by humans.

Nations and companies are made up of people, and their actions are the result of careful deliberation. Even if they slowly hand the decision-making process over to machines, that is a choice made by humans. The organizational risks and deadly accidents involving AI described by CAIS again stem from human negligence. Only the idea of rogue AIs seems beyond human control, but it's the most far-fetched category of them all.

The evidence presented isn't convincing. The researchers point to unhinged chatbots like Bing. "In a conversation with a reporter for the New York Times, it tried to convince him to leave his wife. When a philosophy professor told the chatbot that he disagreed with it, Bing replied, 'I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you'." Sure, these remarks are creepy, but are they really dangerous on a wide scale?

Extrapolating some of the technology's current limitations and weaknesses to a full-blown existential threat requires a long stretch of the imagination. Doomers have to believe that non-existent technical abilities are inevitable. "It is possible, for example, that rogue AIs might make many backup variations of themselves, in case humans were to deactivate some of them," the paper states.

"Other ways in which AI agents might seek power include: breaking out of a contained environment; hacking into other computer systems; trying to access financial or computational resources; manipulating human discourse and politics by interfering with channels of information and influence; and trying to get control of physical infrastructure such as factories." All of that seems impossible with current models.

State-of-the-art systems like GPT-4 do not autonomously generate outputs and carry out actions without human supervision. Left to their own devices, they would not suddenly hack computers or interfere with elections, and it's difficult to see how they could anyway, considering they often produce false information and broken code. Current AI is far, far away from superintelligence, and it's not clear how to reach those superior capabilities.

As is the case with all technologies, AI is a human invention. The danger does not lie with the machine itself, but in how we use it against one another. In the end, the biggest threat to humanity is us.

The Register has asked CAIS for further comment. ®
