Technology

‘AI defense development may gain insight from biological immune systems’: DARPA

“Novel adversarial AI defense development may gain insight and inspiration from biological systems, such as the immune system and its interactions with bacteria and viruses,” according to a DARPA proposal.

Read More: Nature is intelligent: Pentagon looks to insects for AI biomimicry design

The research funding arm of the Pentagon is worried that “adversaries” could turn AI against its programmers by “poisoning” it into doing the bidding of those it wasn’t meant to serve.

With “poisoning attacks,” attackers deliberately influence the training data to manipulate the results of a predictive model, according to IEEE.
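To make the idea concrete, here is a minimal sketch (toy data, hypothetical classifier — not any system DARPA describes) of how corrupted training data skews a predictive model: an attacker flips the labels in the training set, and a simple nearest-centroid classifier learns an inverted decision boundary.

```python
# Toy data-poisoning illustration: flipping training labels makes a
# nearest-centroid classifier give the opposite answer at test time.

def centroid(points):
    """Mean of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    """data: list of (features, label) pairs with labels 0/1 -> class centroids."""
    by_class = {0: [], 1: []}
    for x, y in data:
        by_class[y].append(x)
    return {y: centroid(xs) for y, xs in by_class.items()}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist2(model[y], x))

clean = [([0.0, 0.0], 0), ([0.2, 0.1], 0), ([1.0, 1.0], 1), ([0.9, 1.1], 1)]
poisoned = [(x, 1 - y) for x, y in clean]  # attacker flips every label

clean_model = train(clean)
bad_model = train(poisoned)

print(predict(clean_model, [0.1, 0.1]))  # 0: correct class
print(predict(bad_model, [0.1, 0.1]))    # 1: poisoned model inverts the answer
```

Real poisoning attacks are subtler — corrupting only a small fraction of the data, or planting backdoor triggers — but the mechanism is the same: the model faithfully learns whatever the training set tells it.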

Read More: DARPA launches AI chemistry program to develop new molecules for military use

A natural analogy: a virus can manipulate the host’s immune system into attacking itself, and bacteria can alter the host environment so they can grow stronger and spread.

“The lack of a comprehensive theoretical understanding of ML vulnerabilities leaves significant exploitable blind spots”

The Defense Advanced Research Projects Agency (DARPA) will hold a Proposers Day on February 6, 2019 for a program aimed at developing a new generation of defenses against adversarial deception attacks on machine learning (ML) models, such as poisoning attacks and inference attacks — “where malicious users infer sensitive information from complex databases at a high level,” thus endangering the integrity of an entire database, according to Techopedia.
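A classic toy version of such an inference attack is “differencing” (the data below is invented for illustration): even when a database only answers aggregate queries, two queries that differ by a single person reveal that person’s private value.

```python
# Toy differencing attack: aggregate-only queries still leak one
# individual's value when two allowed query sets differ by that person.

salaries = {"alice": 90_000, "bob": 75_000, "carol": 82_000}

def sum_query(names):
    """The only interface exposed: a total over a set of employees."""
    return sum(salaries[n] for n in names)

total_all = sum_query(["alice", "bob", "carol"])
total_without_bob = sum_query(["alice", "carol"])

# Bob's salary was never queried directly, but it falls out by subtraction.
bob_salary = total_all - total_without_bob
print(bob_salary)  # 75000
```

Defenses such as differential privacy add calibrated noise to query answers precisely to blunt this kind of subtraction.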

DARPA’s Guaranteeing AI Robustness against Deception (GARD) program will seek game-changing research ideas to develop theory, defenses, and testbeds leading to robust, deception-resistant ML models and algorithms.

GARD seeks to push the state-of-the-art in ML defenses beyond classification by defending via detection, location, and prediction, and beyond the standard modality of digital images by developing defenses against physical world attacks in a variety of pertinent modalities, such as video and audio.

GARD has three objectives:

  1. Develop theoretical foundations for defensible ML, including metrics for measuring ML vulnerability and identifying ML properties that enhance system robustness.
  2. Create, and empirically test, principled defense algorithms in diverse settings.
  3. Construct a scenario-based evaluation framework to characterize defenses under multiple objectives and threat models, such as the physical world and multimodal settings.

“The field now appears increasingly pessimistic, sensing that developing effective ML defenses may prove significantly more difficult than designing new attacks”

According to DARPA, “the growing sophistication and ubiquity of ML components in advanced systems dramatically increases capabilities, but as a byproduct, increases opportunities for new, potentially unidentified vulnerabilities.

“The acceleration in ML attack capabilities has promoted an arms race: as defenses are developed to address new attack strategies and vulnerabilities, improved attack methodologies capable of bypassing the defense algorithms are created.

Read More: DARPA wants to make AI a ‘collaborative partner’ for national defense

“The field now appears increasingly pessimistic, sensing that developing effective ML defenses may prove significantly more difficult than designing new attacks, leaving advanced systems vulnerable and exposed.

“Further, the lack of a comprehensive theoretical understanding of ML vulnerabilities in the ‘Adversarial Examples’ field leaves significant exploitable blind spots in advanced systems and limits efforts to develop effective defenses.”
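The “Adversarial Examples” field DARPA refers to studies inputs that are minimally perturbed to fool a model. A bare-bones sketch (toy weights, no real data — purely illustrative) for a linear classifier: since the gradient of a linear score with respect to the input is just the weight vector, nudging each feature a small step against it can flip the prediction.

```python
# Minimal adversarial-example illustration against a toy linear classifier:
# a small, worst-case perturbation of the input flips the decision.

w = [2.0, -1.0, 0.5]          # hypothetical trained weights
b = -0.25

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return 1 if score(x) >= 0 else 0

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

x = [0.3, 0.4, 0.2]           # input the model classifies as 1
eps = 0.2                     # perturbation budget per feature

# Fast-gradient-sign-style step: for a linear model the gradient of the
# score w.r.t. x is w itself, so move each feature to lower the score.
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # 1
print(predict(x_adv))  # 0: a small perturbation flips the prediction
```

Deep networks are nonlinear, but the same gradient-following trick works alarmingly well against them, which is why the arms race DARPA describes has been so one-sided.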

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
