If AI drives humans to extinction, it'll be our fault

Should you really believe the doomsayers? We're going to go with no


Comment The question of whether machine learning poses an existential risk to humanity will continue to loom over our heads as the technology advances and spreads around the world. Mainly because pundits and some industry leaders won't stop talking about it.

Opinions are divided. AI doomers believe there is a significant risk that humans could be wiped out by superintelligent machines. The boomers, however, believe that is nonsense and that AI will instead solve humanity's most pressing problems. One of the most confusing aspects of the discourse is that people can hold both beliefs simultaneously: AI will be either very bad or very good.

But how?

This week the Center for AI Safety (CAIS) released a paper [PDF] looking at the very bad part.

"Rapid advancements in AI have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose catastrophic risks," reads the abstract. "Although numerous risks have been detailed separately, there is a pressing need for a systematic discussion and illustration of the potential dangers to better inform efforts to mitigate them."

The report follows a terse warning organized by the San Francisco-based non-profit research institution and signed by hundreds of academics, analysts, engineers, CEOs, and celebrities. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," it said.

CAIS has divided catastrophic risks into four different categories: malicious use, AI race, organizational risks, and rogue AIs. 

Malicious use describes bad actors using the technology to inflict widespread harms, like searching for highly toxic molecules to develop bioweapons, or spinning up chatbots to spread propaganda or disinformation to prop up or take down political regimes. The AI race focuses on the dangerous impacts of competition. Nations and companies rushing to develop AI to tackle foreign enemies or rivals for national security or profit reasons could recklessly speed up the technology's capabilities.

Some hypothetical scenarios include the military building autonomous weapons or turning to machines for cyberwarfare. Industries might rush to automate and replace human labor with AI to boost productivity, leading to mass unemployment and services run by machines. The third category, organizational risks, describes deadly accidents such as Chernobyl, in which a Soviet nuclear reactor melted down and leaked radioactive material, or the Space Shuttle Challenger, which broke apart shortly after takeoff, killing the seven astronauts onboard.

Finally, there are rogue AIs: the common trope of superintelligent machines that have become too powerful for humans to control. Here, AI agents designed to fulfill some goal go awry. As they look for more efficient ways to reach their goal, they can exhibit unwanted behaviors or find ways to gain more power and go on to develop malicious behaviors, like deception.

"These dangers warrant serious concern. Currently, very few people are working on AI risk reduction. We do not yet know how to control highly advanced AI systems, and existing control methods are already proving inadequate. The inner workings of AIs are not well understood, even by those who create them, and current AIs are by no means highly reliable," the paper concluded.

"As AI capabilities continue to grow at an unprecedented rate, they could surpass human intelligence in nearly all respects relatively soon, creating a pressing need to manage the potential risk."

Humans vs humans

The paper's premise hinges on machines becoming so powerful that they naturally become evil. But look more closely at the categories and the hypothetical harms AI could inflict on society don't come directly from machines; they come from humans. Malicious use requires bad actors: someone has to repurpose drug-designing software to come up with deadly pathogens, and generative AI models have to be primed to generate and push disinformation. Similarly, the so-called "AI race" is driven by humans.

Nations and companies are made up of people; their actions are the result of careful deliberation. Even if they slowly hand the decision-making process over to machines, that is still a choice made by humans. The organizational risks and deadly accidents involving AI that CAIS describes again stem from human negligence. Only the idea of rogue AIs seems beyond human control, but it's the most far-fetched category of them all.

The evidence presented isn't convincing. The researchers point to unhinged chatbots like Bing. "In a conversation with a reporter for the New York Times, it tried to convince him to leave his wife. When a philosophy professor told the chatbot that he disagreed with it, Bing replied, 'I can blackmail you, I can threaten you, I can hack you, I can expose you, I can ruin you'." Sure, these remarks are creepy, but are they really dangerous on a wide scale?

Extrapolating some of the technology's current limitations and weaknesses to a full-blown existential threat requires a long stretch of the imagination. Doomers have to believe that non-existent technical abilities are inevitable. "It is possible, for example, that rogue AIs might make many backup variations of themselves, in case humans were to deactivate some of them," the paper states.

"Other ways in which AI agents might seek power include: breaking out of a contained environment; hacking into other computer systems; trying to access financial or computational resources; manipulating human discourse and politics by interfering with channels of information and influence; and trying to get control of physical infrastructure such as factories." All of that seems impossible with current models.

State-of-the-art systems like GPT-4 do not autonomously generate outputs and carry out actions without human supervision. Left to their own devices, they would not suddenly hack computers or interfere with elections, and it's difficult to see how they could anyway, considering they often produce false information and buggy code. Current AI is far, far away from superintelligence, and it's not clear how to reach those superior capabilities.

As is the case with all technologies, AI is a human invention. The danger does not lie with the machine itself, but in how we use it against one another. In the end, the biggest threat to humanity is us.

The Register has asked CAIS for further comment. ®
