Experts scoff at UK Lords' suggestion that AI could one day make battlefield decisions

Conservative peer admits he can't tell dogs from cats either


Experts in technology law and software clashed with the UK House of Lords this week over whether it was technically possible to hand responsibility for battlefield decisions to AI-driven weapons.

During the AI in Weapon Systems Committee hearing on Thursday, the Lords struggled to draw the experts towards the idea that handing over such decisions might eventually be possible, or could be introduced cautiously, apparently concerned about the UK losing ground in the adoption of AI in warfare.

Lord Houghton of Richmond said the committee wanted to recommend how to progress in some kind of legal framework.

The former Chief of the Defence Staff of the British Armed Forces asked for comment on whether the legal duties of distinction and proportionality – distinguishing combatants from civilians, and weighing expected harm against military advantage – can eventually be discharged autonomously.

Christian Enemark, professor of international relations at the University of Southampton, responded: "Only humans can do discrimination, only humans can do proportionality, and the autonomous discharging by a nonhuman entity is a philosophical nonsense, arguably."

Lord Houghton replied: "Incrementally, it may be that advances in the technology will advance the envelope under which those sorts of delegations can be made."

AI ethics expert Laura Nolan, principal software engineer with reliability tooling vendor Stanza Systems, argued that an AI weapon system making battlefield decisions could not assess the proportionality of a course of action.

"You need to know the anticipated strategic military value of the action and there's no way that a weapon can know that," she said. "A weapon is in the field, looking at perhaps some images, some sort of machine learning and perception stuff. It doesn't know anything. It's just doing some calculations, which don't really offer any relation to the military value."

Nolan added: "Only the commander can know the military value because the military value of a particular attack is not purely based on that conflict, local context on the ground. It's the broader strategic context. It's absolutely impossible to ask a weapon on the ground and make that determination."

Taniel Yusef, visiting researcher at Cambridge University's Centre for the Study of Existential Risk, said that even the simple classification algorithms that might be used to identify targets can be shown to mistake a cat for a dog, for example.
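Yusef's point can be illustrated with a minimal sketch (not her actual example; the classifier, features, and numbers below are all invented for illustration): a nearest-centroid classifier whose arithmetic is exactly right but whose label is wrong, because the features it measures do not capture what actually distinguishes the classes.

```python
# Hypothetical sketch: a nearest-centroid classifier. The distance
# calculation is exact ("the maths will be right"), yet the label
# can still be wrong on the ground.

def nearest_centroid(point, centroids):
    """Return the label whose centroid is closest to the point."""
    def dist_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist_sq(point, centroids[label]))

# Invented toy feature space: (ear pointiness, snout length).
centroids = {"cat": (0.9, 0.2), "dog": (0.3, 0.8)}

# A small dog with pointy ears and a short snout lands nearer the
# "cat" centroid: the computation is correct, the answer is not.
print(nearest_centroid((0.8, 0.3), centroids))  # -> cat
```

The calculation is deterministic and verifiably correct given its inputs, which is precisely why, as Yusef warns below, a post-incident report that "looks at the maths" will conclude the weapon behaved as designed.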

"When this happens in the field, you will have people on the ground saying these civilians were killed and you'll have a report by the weapon that feeds back [that] looks at the maths," she said.

"The maths says it was a target... it was a military base because the math says so and we defer to maths a lot because maths is very specific and ... the maths will be right.

"There's a difference between correct and accurate. There's a difference between precise and accurate. The maths will be right because it was coded right, but it won't be right on the ground. And that terrifies me because without a legally binding instrument enshrining that kind of meaningful human control with oversight at the end that's what we'll be missing."

"It's not technically possible [to make judgements about proportionality] because you can't know the outcome of a system [until] it has achieved the goal that you've coded, and you don't know how it's got there."

Conservative peer Lord Sarfraz interjected: "The other day, I saw a dog which I thought was a cat."

"I assume you didn't shoot it," Yusef replied. ®
