The controversial inclusion of Elon Musk’s xAI in a set of Defense Department contracts worth up to $200 million was a late-in-the-game decision under the Trump administration, a former Pentagon employee says. The contracts had been in the works for months, with planning dating back to the Biden administration, said Glenn Parham, a former Pentagon employee who worked on the early stages of the initiative.
Before Parham took a government buyout in March, he said, planning for the contracts had not included xAI. Parham was a generative artificial intelligence technical lead in the Pentagon’s Chief Digital and Artificial Intelligence Office, where he helped negotiate agreements and integrate AI into Defense Department initiatives.
“There had not been a single discussion with anyone from X or xAI up until the point that I left,” he said. “It came out of nowhere.”
The Pentagon ended up announcing contracts with four companies last week: Anthropic, Google, OpenAI and xAI. Each contract has a $2 million floor and a $200 million ceiling, with the amount paid out depending on how each partnership goes. (OpenAI’s contract was initially announced last month.) The inclusion of Musk’s xAI raised questions among artificial intelligence experts.
Days before the announcement, Grok, xAI’s chatbot, had gone on an antisemitic rant that the company struggled to contain. The company was also rolling out animated “companions” that can be sexually suggestive and violent. Musk said he merged X and xAI in March.
In short, xAI did not have the kind of reputation or track record that generally leads to lucrative government contracts, even though Musk has a long history of working with the government. Critics wondered whether xAI’s models were trustworthy enough for government work.
Last Tuesday, Senate Minority Leader Chuck Schumer, D-N.Y., called the contract “wrong” and “dangerous” on the Senate floor, citing Grok’s antisemitic episode, during which it called itself “MechaHitler.” He insisted that “the Trump administration must explain how this happened, what the parameters of the deal are and why they think our national security isn’t worth holding to a higher standard.”
Parham said the program, which is billed as a partnership between the Defense Department and American technology companies at the frontier of artificial intelligence development, originally focused on more established companies, including OpenAI and Anthropic, which, in addition to being older than xAI, also have longer-standing agreements with large cloud computing companies and established relationships with the military.
It is not clear what led Pentagon officials to add xAI to the mix of contractors at some point since March. The department’s Chief Digital and Artificial Intelligence Office, which announced the contracts, did not answer written questions about why xAI was chosen, but the Pentagon said in a statement that the antisemitism episode was not enough to disqualify the company.
“Several frontier models have produced questionable outputs in the course of their continued development, and the department will manage the risks associated with this emerging technology area throughout the prototyping process,” the Defense Department told NBC News in a statement Friday.
“These risks did not justify excluding the use of these capabilities as part of DoD prototyping efforts,” it said.
The department said that frontier models, by their nature, are on the cutting edge and therefore present both opportunities and risks.
xAI did not respond to requests for comment Friday and Monday.
xAI’s inclusion adds a wrinkle to Musk’s complicated relationship with the federal government. Even before Musk’s stint this year as a White House adviser to President Donald Trump, his business empire had deep ties inside the government, including contracts for Musk’s rocket company, SpaceX. Musk and Trump are now locked in a renewed dispute, and Musk has vowed to launch a third political party focused on reducing the federal debt. He repeated the vow as recently as July 6, although he does not appear to have taken concrete public steps to set the party up. Trump has threatened Musk’s government contracts during the dispute.
Some experts said they could see why the Defense Department might want xAI as a partner, despite its flaws.
“I think the department benefits when it engages with as many organizations as possible,” said Morgan Plummer, policy director at Americans for Responsible Innovation, an advocacy group that generally favors a middle path on AI regulation.
Parham said the idea for the $800 million program predates the Trump administration and that work began in October, after President Joe Biden issued an executive order on AI and national security. He said he worked on it for about five months before leaving and that, in all, he spent nearly three years working on AI at the Defense Department.
The contracts with the four artificial intelligence companies also significantly deepen the military’s relationship with the tech industry’s buzziest emerging technology. In exchange for the millions of dollars, the military will get to use each company’s large language model, or LLM, which for many users takes the form of a chatbot. Experts said the military could use the LLMs for a variety of purposes, from mundane tasks such as summarizing emails to more complicated uses such as translating languages or analyzing intelligence.
Other AI projects led by the Defense Department include Project Maven, a system that integrates large amounts of data from many sources with machine learning to produce visualizations for use during conflict.
Within the AI industry, xAI’s capabilities are a matter of debate. Grok scores highly on some artificial intelligence benchmarks, such as one called “Humanity’s Last Exam,” which consists of questions submitted by experts across fields. But its recent dalliance with neo-Nazism, and before that with race relations in Musk’s native South Africa, has made the chatbot an object of mockery within the industry and among the broader public.
“Grok is probably the least safe of these systems. It’s doing some really strange things,” said AI critic Gary Marcus, a professor emeritus of psychology at New York University.
Marcus pointed to Grok’s ideological rants and xAI’s decision not to publish the safety reports that have become industry standard for major AI models.
Parham said he believes xAI may need more time than the other three Pentagon contractors before its technology is fully available to the military. He said other companies, including Anthropic and OpenAI, have already gone through a lengthy government review and compliance process to get their software authorized for use, including their application programming interfaces, or APIs, which coders use to build on top of LLMs. He said that as of March, when he left, xAI had not done the same.
“I think it’s going to take them much longer to [get] their models deployed in government environments,” he said. “It’s not impossible. It’s just that they’re far, far behind everyone else.”
Parham said the approval process for Anthropic and OpenAI took more than a year from documentation submitted to authorization granted.
The Pentagon’s use of commercial LLMs has drawn some criticism, in part because AI models are generally trained on enormous data sets that may include personal information from the open web. Mixing that information with military applications is too risky, said Sarah Myers West, co-executive director of the AI Now Institute, a research organization.
“It introduces security and privacy vulnerabilities into our critical infrastructure,” she said.
xAI is a relatively young startup. Musk started it in 2023, after he co-founded OpenAI years earlier and later feuded with its CEO, Sam Altman.
Some AI and defense experts said they were shocked by Grok’s recent antisemitic meltdown and wondered whether something similar could recur in government use.
“I would have some concerns associated with safety based on the release of their most recent model,” said Josh Wallin, who researches the intersection of AI and the military at the Center for a New American Security, a Democratic-leaning think tank.
Wallin said Grok’s antisemitic rant demonstrates the potential for unpredictable or risky behavior, such as presenting false or misleading information as fact, a failure known as hallucination.
“Let’s say you’re automatically generating reports from different intelligence sources, or you’re producing a daily report for a commander. There would be concern about whether what you’re getting is a hallucination,” he said.