
US Army is putting all its AI eggs in one basket, with AI in every battlefield system

Creating an AI ecosystem in the military requires trust, which AI has yet to earn

The fact that AI can behave unexpectedly isn’t preventing the US Army from putting all its AI eggs in one basket, with the technology spanning every battlefield system.

Making AI trustworthy is the goal of every developer working on the technology, yet so far AI hasn't proven itself fully deserving of our trust.

On Monday, however, Army AI Task Force (AAITF) Director Brig. Gen. Matthew Easley said that AI “needs to span every battlefield system that we have, from our maneuver systems to our fire control systems to our sustainment systems to our soldier systems to our human resource systems and our enterprise systems.”

Read More: ‘AI needs to span every battle system we have’: US Army AI Task Force director

If the Army is that dedicated to making AI prevalent in every battlefield system, it must believe it will be able to trust and control that AI, something developers are still struggling to achieve.

For example, just last month OpenAI announced that it had created an AI that broke the laws of its simulated physics to win at hide and seek.

Taking what was available in its simulated environment, the AI began to exhibit “unexpected and surprising behaviors,” “ultimately using tools in the environment to break our simulated physics,” according to the team.

Now imagine if an AI were to exhibit “unexpected and surprising behaviors” within a military setting. What could possibly go wrong?

The AAITF director said, “We see AI as an enabling technology for all Army modernization priorities — from future vertical lift to long range precision fires to soldier lethality,” which raises the question: has the Army already solved the trust issue with AI and just not told us, or is that something it’s still working on?

We do have proof that the military has been working on trustworthiness through projects carried out by the Defense Advanced Research Projects Agency (DARPA).

Launched in February, DARPA’s Competency-Aware Machine Learning (CAML) program aims “to develop competence-based trusted machine learning systems whereby an autonomous system can self-assess its task competency and strategy, and express both in a human-understandable form, for a given task under given conditions.”
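To make that goal concrete, here is a minimal, hypothetical sketch in Python of what “competence-aware” behavior could look like: a classifier that reports its own confidence and defers to a human when that confidence falls below a threshold. The model, labels, and threshold are all invented for illustration; this is not DARPA’s actual CAML system.

```python
# A toy illustration of the "competence-aware" idea behind CAML: the system
# self-assesses its confidence and expresses it in human-understandable form,
# deferring to a human when it judges itself not competent. Everything here
# (model, labels, threshold) is hypothetical, not DARPA's implementation.
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def competence_aware_predict(logits, labels, threshold=0.8):
    """Return a prediction plus a human-readable self-assessment."""
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    confidence = float(probs[best])
    if confidence >= threshold:
        return f"Competent: classifying as '{labels[best]}' ({confidence:.0%} confidence)."
    return (f"Not competent: best guess is '{labels[best]}' at only "
            f"{confidence:.0%} confidence; deferring to a human operator.")

labels = ["vehicle", "structure", "unknown object"]

# One logit dominates, so the system commits to an answer.
print(competence_aware_predict([4.0, 0.5, 0.2], labels))

# The logits are nearly tied, so the system reports low competence and defers.
print(competence_aware_predict([1.1, 1.0, 0.9], labels))
```

The point of the sketch is the second case: instead of silently guessing, a competence-aware system tells its human partner how much to trust it.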

DARPA acknowledged that “the machines’ lack of awareness of their own competence and their inability to communicate it to their human partners reduces trust and undermines team effectiveness.”

In other words, the military is aware that AI can act unpredictably, and it wants to make sure that Prometheus isn’t let loose in machine learning systems.

Read More: Keeping Prometheus out of machine learning systems

Just as Prometheus defied the gods to bring the flame of knowledge to humanity, DARPA wants to make sure that machine learning is trustworthy and doesn’t free itself and spread like an uncontrollable wildfire.

Last year, DARPA announced that it was building an Artificial Intelligence Exploration (AIE) program to turn machines into “collaborative partners” for US national defense.

When DARPA launched the Guaranteeing AI Robustness against Deception (GARD) project, Program Manager Dr. Hava Siegelmann admitted, “We’ve rushed ahead, paying little attention to vulnerabilities inherent in machine learning platforms – particularly in terms of altering, corrupting or deceiving these systems.”

“We must ensure machine learning is safe and incapable of being deceived,” she added.
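For a sense of what “deceived” means in practice, here is a minimal, hypothetical sketch of the vulnerability class GARD targets: a tiny, targeted nudge to the input flips a classifier’s decision even though the input barely changes. The weights, inputs, and labels are invented for illustration; real adversarial attacks apply the same trick to deep networks.

```python
# A toy adversarial example against a linear classifier, illustrating the
# kind of deception GARD is concerned with. All numbers and labels here are
# invented for illustration.
import numpy as np

# A trivially simple "trained" model: label is the sign of (w . x + b).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def classify(x):
    return "threat" if np.dot(w, x) + b > 0 else "benign"

x = np.array([0.2, 0.3, 0.4])
print(classify(x))              # -> benign

# FGSM-style perturbation: nudge each feature a small step (epsilon) in the
# direction that raises the model's score, so no single feature changes much.
epsilon = 0.25
x_adv = x + epsilon * np.sign(w)

print(classify(x_adv))          # -> threat
print(np.abs(x_adv - x).max())  # largest per-feature change: 0.25
```

Defending against this kind of manipulation, and certifying that a fielded system can’t be steered this way, is the robustness problem Siegelmann describes.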

The Army has gone all-in on AI, essentially putting all of its eggs in one basket in its mission to develop an “AI ecosystem for use within the Army,” which will encompass just about every aspect of battlefield systems.

With so much power being centralized and consolidated in AI, surely the Army has figured out a way to make it trustworthy, hasn’t it?

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
