Military officials said Wednesday that the Pentagon’s mandate to speed artificial intelligence (AI) adoption has the services wrestling to balance speed and safety while managing security, governance, and workforce challenges.

The Department of Defense (DOD), which the Trump administration has rebranded as the Department of War, is moving to carry out a mandate from Defense Secretary Pete Hegseth to make the department “an ‘AI-first’ warfighting force across all domains.”

But AI adoption is easier said than done, and it requires weighing risk against reward, leaders from across the Navy and Marine Corps said during the AFCEA West 2026 conference in San Diego.

Balancing AI risks, benefits

Daniel Corbin, technical director of the Command, Control, Communications, and Computers division in the Office of the Deputy Commandant for Information at the U.S. Marine Corps, said that to meet the AI mandate, the department may have to accept greater risk.

The problem that presents, he explained, is that pushing systems onto networks too quickly could expose the broader enterprise.

“I have to look across the entire enterprise and say, how am I putting the enterprise at risk by allowing this to go on the network?” Corbin said.

For now, as his team works to determine the most secure path for adopting AI systems, it is treating AI like traditional IT systems during security certification, Corbin said. Concerns persist, however, about AI-specific vulnerabilities such as data injection or model manipulation after deployment.

“We’re really trying to validate the systems that we put out there,” Corbin said, describing an emphasis on confidentiality, integrity, and availability of data. “It’s really about the data.”

Confidentiality ensures only authorized users can access information, he said. Integrity addresses whether users can trust the data, and availability ensures it is accessible when needed. Each dimension carries risk, which must be weighed against operational benefit.
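Corbin did not describe a specific mechanism, but the integrity dimension in practice often comes down to detecting tampering, and a cryptographic digest check is the conventional building block. A minimal sketch in Python follows; the function names and the sample payload are illustrative, not anything the Marine Corps specified:

```python
import hashlib
import hmac

def digest(payload: bytes) -> str:
    # SHA-256 fingerprint recorded at publication time, e.g. alongside the dataset.
    return hashlib.sha256(payload).hexdigest()

def verify(payload: bytes, expected: str) -> bool:
    # Integrity check: recompute the fingerprint and compare in constant time.
    return hmac.compare_digest(digest(payload), expected)

record = digest(b"sensor track 0417")        # stored while the data was trusted
print(verify(b"sensor track 0417", record))  # True: data unchanged
print(verify(b"sensor track 0418", record))  # False: data was altered
```

Any downstream consumer that re-runs the check before using the data can tell whether it can be trusted, which is the "integrity" leg of the triad Corbin described.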

Continuous monitoring

Gaurang Dave, chief technology officer (CTO) for Marine Corps Systems Command, agreed that securing data is a critical step. But he warned that the nature of AI requires continuous monitoring once systems are authorized.

AI is a constantly shifting technology, making it difficult to keep pace, and many capabilities arrive as a black box, limiting visibility into how they function, Dave said. Therefore, authorization of AI systems cannot be treated as a one-time event, he explained.

“Once you get authorization, it’s not a finish line. It’s an ongoing process,” Dave said, emphasizing continuous monitoring, patching, and lifecycle risk management.

Sustained monitoring will allow the department to accelerate AI adoption in a secure manner, said Tomer Atzili, frontier AI lead in the Department of the Navy’s Office of the Chief Information Officer. He called for continuous testing of models for data drift, poisoning, and misalignment.
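Atzili did not name a specific test, but continuous drift checks of the kind he calls for are commonly built on distribution-comparison statistics such as the Population Stability Index (PSI). A minimal sketch, with conventional thresholds rather than any Navy-specified method:

```python
import math
from collections import Counter

def psi(baseline, live, bins=10):
    """Population Stability Index between two samples of a numeric feature.
    By common convention, PSI < 0.1 reads as stable and > 0.25 as significant drift."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0

    def bucket_shares(xs):
        # Histogram each sample into the baseline's bins, clamping out-of-range values.
        counts = Counter(min(max(int((x - lo) / width), 0), bins - 1) for x in xs)
        # A small floor keeps empty buckets from producing log(0).
        return [max(counts.get(i, 0) / len(xs), 1e-6) for i in range(bins)]

    b, l = bucket_shares(baseline), bucket_shares(live)
    return sum((lb - bb) * math.log(lb / bb) for bb, lb in zip(b, l))

baseline = [0.1 * i for i in range(100)]        # training-time distribution
shifted = [0.1 * i + 4.0 for i in range(100)]   # drifted live data
print(psi(baseline, baseline) < 0.1)   # True: identical data, no drift
print(psi(baseline, shifted) > 0.25)   # True: shift flagged as drift
```

Run on a schedule against live inputs, a check like this is one way an authorization becomes "an ongoing process" rather than a one-time event.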

He noted that the department has traditionally focused on the “risk of action” while giving less attention to the “risk of inaction,” but the risk of inaction is particularly acute with AI.

“If we sit and do nothing and are slow, our adversaries are going to lap us,” Atzili said. “We need to start considering both the risk of action and the risk of inaction.”

Training the workforce

Atzili also stressed the need for education across the workforce, warning that some personnel are reluctant to use AI while others may trust it too much.

“We need to educate those people so that they are verifying things and making sure that they are using AI … in smart, responsible ways,” Atzili said.

Randall Sharo, CTO for U.S. Fleet Cyber Command, echoed Atzili’s call to educate the workforce on responsible AI use, adding that AI differs from conventional systems because of the way humans interact with it.

“You’ve got some folks that are going to cede responsibility to the model,” Sharo said. “Others will resist it entirely. The greater challenge lies in defining the human-machine relationship and ensuring systems remain aligned with evolving objectives.”

To AI or Not to AI

Robert Keisler, senior science and technology manager and director of data science and AI at Naval Information Warfare Center Atlantic, cautioned against adopting AI without clearly defined outcomes.

“We do AI for AI’s sake,” Keisler said. “We don’t ask the question, why AI?”

He urged the department to establish measures of effectiveness and performance before approving use cases and to strengthen evaluation and verification processes.

“You can’t just throw [AI] over the fence and say, ‘Hey fleet, hey Marine, you’re going to love this,’” Keisler said.

Beyond technical validation, he said, integrating AI will require changes in tactics, techniques, and procedures.

“We’re going to have to change the way we fight,” Keisler said. “And we have to do all three of these things at the same time.”

Lisbeth Perez
Lisbeth Perez is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.