Palantir’s ‘Workflow’ of AI-Directed Death
A Pentagon presentation explains how a "few clicks" with Project Maven finalizes target selection, which may have included an Iranian girls' elementary school.
The following story is co-published with Matt Bivens’ Substack newsletter, The 100 Days.
In the brief video below, a Pentagon official shows off the government’s fancy new AI-guided target-and-destroy computer program.
“This is Maven Smart System,” says the Pentagon’s chief artificial intelligence officer, “Palantir’s software-as-a-service product that we are deploying across the entire Department [of War].”
“As you can see, it’s not just one data feed, it’s multiple,” he continues. “The single visualization tool allows you to select-deselect different types of data, look at different approaches to data, but more importantly — action, from the same system that you’re trying to develop your workflows around. Once you have a detection that you want to actually move into a targeting workflow, this is what we do: left click, right click, left click…”
Eventually, a nominated target (“a detection”) gets moved to a new “workflow” called CoA (for course of action) generation, in which the AI helps choose the best available weapon. From there “we can move directly into ‘how do we action that target?’,” i.e., how do we blow it up or kill it.

“So, we’ve gone from identifying the target, to now coming up with a course of action, to now actioning that target — all from one system. This is revolutionary. [Before,] we were having this done in about eight or nine systems, where humans were literally moving ‘detections’ left and right in order to get to our desired end state, in this case, actually closing a kill chain.”
Before Maven, humans were doing too much of the mental lifting. Now, Maven is doing more of that, which means a contemplated killing can be more briskly accomplished.
Perhaps the humans will eventually be taken out of these workflows, and the computer itself could “close a kill chain.”
But not quite yet. “There’s always a human in the loop, so there is always a human that makes the ultimate decision,” Palantir’s head of U.K. and European operations reassured the BBC. “That’s the current setup.”
The current setup. Right.
Eight years ago, Google bailed out of Project Maven. Thousands of its employees had threatened to resign at the very idea of creating a “GoogleEarth for War” — a zoom-in-or-out video-and-satellite feed that could, with just a few mouse clicks, destroy or kill what it sees.
Just a few weeks ago, Anthropic, another tech company with halfhearted “Don’t Be Evil” pretensions, got into a similar public row with the Pentagon. Anthropic said it did not want the government to use its Claude AI for “mass surveillance” and “fully autonomous weapons.”
The company expressed fear that the Pentagon’s planned AI system might indeed end up off the leash, out there making its own decisions about whom to target, or when and how to kill them.
Given the rapidly accumulating warning signs about rogue AI, this seems like a legitimate question, and one worthy of careful consideration and debate. But it was instead shut down instantly by our U.S. president — who reacted as if the asking of such questions represented a personal betrayal of his own awesomeness. Donald Trump banned the company from all federal contracts, and went on a Truth Social freakout in which he threatened “criminal consequences” for the “Leftwing nut jobs at Anthropic.”

One day later, the United States and Israel launched our sneak-attack assassination of the Iranian leadership in a massive bombing campaign that also accidentally killed about 175 people, most of them children, at an elementary school.
Days later, Project Maven — brought to us by the tough-talking, CIA-seed-money-funded sociopaths at Palantir — was publicly embraced by the Pentagon and then by all of NATO. We’re told that the Maven Smart System was involved in many of the thousands of missile strikes visited on Iran during our month-old war there.
No one will say if Maven was involved in the school strike itself — which pretty much tells you right there that it was. Otherwise, they’d deny it. So, when that undeclared war opened with more than 1,000 bombs dropped in a single day, an elementary school was likely nominated as a detection on the Maven dashboard, then left click, right click, left clicked into a workflow, then actioned to an ultimately undesired end state.