- The Defense Advanced Research Projects Agency and the Department of War are investing in Silicon Valley for military-related tech research.
- Countries such as Ukraine, Russia and China have deployed AI-powered drones and other advanced technologies in warfare.
- The Pentagon’s commitment to expand tech research continues to spark ethical and policy debates among employees, academics and policymakers.
In “Tron: Ares,” a futuristic AI-powered soldier enters the physical world to protect the tech company that created it. In some ways, the film mirrors the real world: Silicon Valley tech companies are receiving billions of dollars through contracts with the United States Department of War to build AI weapons. The technology is proving decisive in conflicts abroad, but it also raises ethical concerns about entrusting lethal decisions to AI systems.
Early tech defense partnerships date back to the 1950s, when the Defense Advanced Research Projects Agency funded projects that evolved into everyday technologies like GPS, the internet and voice recognition. Today, DARPA invests in drones, AI and electronic chips for the military.
“If the military wants something, they’ll pay for it,” computer science teacher Mark Kwong said. “They will always find a company that will give them what they want.”
Historically, tech companies were more resistant to projects that directly caused death on the battlefield. In June 2018, Google stated that it would not renew its contract with the Pentagon on Project Maven, a program that used AI to identify potential targets in drone footage. Ultimately, Google adopted stricter AI ethics policies, pledging not to build technology for weapons or surveillance.
“It’s a question of how much influence they have,” senior and Model United Nations co-president Tanush Agrawal said. “The military is now running on a lot of technology and without their processors, the U.S. can’t really be at the top of its defense systems. That’s something that we should recognize and look into more: making sure that there’s a separation between the companies and the defense.”
Recently, the U.S.-China technology rivalry has intensified, with China investing over $100 billion in AI research this year. The DoW is developing its own products in tandem, favoring AI investments over traditional commercial and research partnerships. In hopes of competing with China’s domestic defense industry, the DoW has also funded startups in Silicon Valley.
“There’s been a significant shift in Silicon Valley,” California Polytechnic State University philosophy professor Ryan Jenkins said. “They’re not going to cede ground to international competitors anymore.”
One of these startups is Anduril, which is teaming up with Meta to build military helmets with AI-enhanced visual systems. Anduril also launched the Anvil, a quadcopter drone that uses AI to target and intercept other unmanned aerial vehicles. In July 2025, Palantir signed a $10 billion contract with the Department of War to develop AI targeting systems that can scan and identify enemies on the battlefield.
“I think these weapons are a preview of the future,” said Ahmed Banafa, San Jose State University professor of interdisciplinary engineering. “It’s about not risking soldiers’ lives, it’s about competition with China, but it’s going to happen.”
Although the Pentagon has long funded research in Silicon Valley, recent global conflicts have deepened their connection. AI drone warfare has been a feature of the Russia-Ukraine war for years. In June, Ukraine destroyed aircraft at Russian air bases with Operation Spider’s Web, which used over 100 drones to strike specific targets. However, Russia has taken the lead in drone production, building an industry that powered an 800-drone assault earlier this year. With opportunities to gain the upper hand dwindling, AI drones have become increasingly central to the conflict.
“Whether or not there are consequences of countries making AI weapons, they have to,” Banafa said. “If they don’t, somebody else will, and then they’re going to be on the lower end.”
Many critics worry that these partnerships could draw tech workers into military operations, or make it difficult to assign responsibility for civilian deaths caused by a machine rather than a human. On August 6, the youth-led climate justice group Planet Over Profit held a protest in the Bay Area against Scale AI’s Thunderforge project, which would integrate AI into military planning operations.
“These companies are working to harm people for their own profit,” a Planet Over Profit representative said. “This AI makes it easier to kill more people.”
AI systems also make frequent mistakes; on some benchmarks, hallucination rates have been measured as high as 79%. In war, that could mean striking the wrong target or killing an innocent civilian.
“In my first Model UN conference about the Cuban Missile Crisis, there was a lot of discussion over how technology can easily miscalculate,” Agrawal said. “The increase in technology and AI systems makes it so much easier for war to begin, with pretty much just the push of a button.”
Military AI supporters envision a future where automated systems reduce human involvement and battlefield casualties. In late 2024, the United States joined 57 countries in declaring ethical AI in defense a global priority. The DoW claims AI can “provide answers that are well beyond the computational abilities of the human brain,” and that it may be both safer and more cost-effective than traditional weapons such as missiles.
“AI can get things wrong, which in war could end disastrously,” Jenkins said. “But there are less worrisome uses of AI, such as identifying tanks and vehicles rather than human combatants.”
With increased funding for AI use in the military, the Pentagon seems committed to expanding its work with technology firms. In July, President Donald Trump increased the military budget by $156 billion, with $13.5 billion devoted to funding defense tech startups.