## Colophon

tags:: [[process]] [[&article]] [[* 2026 February Anthropic DoW OpenAI]]
url:: https://weaponizedspaces.substack.com/p/the-ai-surveillance-debate-is-missing
date:: [[2026-03-05]]
%%
title:: The AI Surveillance Debate Is Missing the Most Dangerous Part
type:: [[clipped-note]]
file::
published:: 2026-03-05T00:08:19+05:30
author:: [[@Caroline Orr Bueno, PhD]]
[Click to Archive](https://web.archive.org/save/https://weaponizedspaces.substack.com/p/the-ai-surveillance-debate-is-missing)
%%
archive::

## Notes

short::
-

## Full text

## The AI Surveillance Debate Is Missing the Most Dangerous Part - 20260305 - fulltext

---
publish: false
creator: Prateek Waghre
---

## Full Text

%%

### The partnership between government and AI companies is advancing faster than the legal frameworks designed to constrain surveillance.

Despite reassurances from OpenAI CEO Sam Altman and top officials at the Department of War (DoW) regarding the terms of the deal they reached to deploy the AI company’s technology in the military’s classified systems, a review of the contract reveals a massive blind spot that has been largely overlooked, but which may usher in a quiet yet dramatic expansion of domestic surveillance capabilities—all while remaining within the confines of what is technically legal.

The loophole stems from the current legal environment surrounding U.S. government surveillance. To date, laws in this area have almost exclusively focused on the collection and storage of information, specifying things like **what** data may be collected, **whose** data may be collected, what **authorization** is needed, and what **minimization procedures** must be applied (e.g., limits on how long data may be retained; access controls; query restrictions; etc.). But this part of the surveillance pipeline is not necessarily where AI will have the biggest impact.
When AI is integrated into the *downstream* processes of the surveillance pipeline—e.g., analytics and inferential outputs—it is likely to radically expand both the type and quantity of insights that can be extracted about individual persons, as well as the sensitivity of those inferences. [As I touched on in a piece I published last week](https://weaponizedspaces.substack.com/p/e6622699-191b-4474-a48b-a2db329f96c4), certain features of AI—such as its advanced capabilities in pattern recognition, anomaly detection, and large-scale data analysis—can dramatically increase the informational yield derived from existing surveillance datasets. Using AI tools, it is [possible](https://digitalcommons.law.uw.edu/wlr/vol89/iss1/2/) to infer sensitive attributes from non-sensitive data, and to perform other advanced modeling and analytic techniques, including data reconstruction and the generation of real-time and even predictive intelligence.

This means that even if no additional data were collected beyond what is being collected right now, *and* no new collection methods were used, integrating AI into the analysis process would still expand inference capabilities and allow the government to know far more about you based on the data it already has.

Perhaps even more troubling, even those involved in developing AI models often cannot explain how the models produce certain outputs and arrive at certain inferences, which means that the multi-billion-dollar U.S. surveillance apparatus could ultimately come down to a matter of “trust me, bro.”

#### What’s (Not) In The Contract

Before we go any further, let’s take a brief look at the signed agreement and where it stands today.
The [contract](https://openai.com/index/our-agreement-with-the-department-of-war/) between the DoW and OpenAI specifies that “The Department of War may use the AI System for all lawful purposes, consistent with applicable law, operational requirements, and well-established safety and oversight protocols.” It goes on to say that any surveillance activities involving the company’s AI will comply with existing laws like the National Security Act of 1947 and the Foreign Intelligence Surveillance Act of 1978, and that the AI “will not be used for unconstrained monitoring of U.S. persons’ private information.”

Since signing the deal with the Pentagon on Friday, OpenAI has amended the contract, largely in response to public criticism. In an X [post](https://x.com/sama/status/2028640354912923739?s=46) on Sunday, OpenAI’s Altman seemed to suggest that the agreement stipulates that the company would not permit its technology to be used for domestic surveillance purposes, yet in the very same document, there are specifications for which laws must be followed when using the company’s AI models for surveillance.

In various social media posts over the weekend, OpenAI’s head of national security partnerships, Katrina Mulligan, said that the current contract [excludes](https://x.com/natseckatrina/status/2028663223432474969?s=46) defense intelligence agencies such as the NSA, DIA, NGA, NRO, and DCSA, but added that the company [would like to work with the NSA](https://x.com/natseckatrina/status/2028869261578453024?s=46) in the future. (The wording is odd; to me, excluding agencies from the contract would mean that the restrictions don’t apply to them, not that the technology can’t be used for their purposes. But I am not a lawyer.)

None of these contract negotiations have touched on the distinction between AI-powered collection authorities and AI-powered inference capabilities.
Additionally, despite positioning legality as the foundation of the contract, it appears that in the rush to sign something, OpenAI didn’t consider many of the legalities until afterwards. According to social media posts the company made over the weekend, the contract will supposedly [lock in the current law](https://x.com/jeremyphoward/status/2028556035183759719?s=46), so that its AI models will abide by the law as it exists today ([at the time the contract was signed](https://x.com/natseckatrina/status/2027908878952722693?s=46)), even if it changes in the future.

Lawyers who’ve reviewed the document seem [pretty confident](https://www.answer.ai/posts/2026-03-02-oai-dow-contract.html) that this is *not* actually how the contract would be interpreted if the laws do change at some point, but that’s not the only problem with this clause. While clearly designed to address concerns about changes in the law that would expand surveillance powers, there is also a chance that, if Congress decides to start doing its job again, we may one day see new laws aimed at curtailing AI-assisted surveillance. But based on the contract’s intent, as stated by company executives, OpenAI would consider itself bound by the laws of 2026, and thus may resist complying with future laws aimed at limiting surveillance powers.

As of Wednesday, OpenAI executives are continuing to hash out the details of their contract with the Pentagon on X.

#### A Last Stand For Privacy

Although the government has had all of our data for years now, it hasn’t necessarily been able to maximize the amount of intelligence that could be extracted from that data. Bulk collections of data are heterogeneous in nature and characterized by ambiguous and often hidden relationships, as well as a nearly infinite number of possible variable combinations in which one or more variables can be used to make new inferences about another.
Traditional methods of analyzing and drawing inferences from such collections have relied on expert knowledge to identify and characterize relationships in the data based on theory and explicit associations. However, this is an extremely challenging and time-consuming task when dealing with data that may contain non-linear relationships, implicit patterns and associations, or other highly complex or unusual dynamics. As a result, there remains a great deal of untapped informational value in nearly all large collections of human data. That unexploited space is effectively the last remainder of your privacy.

If and when frontier AI models are integrated into military or intelligence analytic pipelines, their ability to synthesize and infer across massive datasets will become operational in ways that are difficult to predict and even harder to undo. In the context of domestic surveillance specifically, this matters for two reasons.

First, intelligence authorities often permit [incidental collection](https://documents.pclob.gov/prod/Documents/OversightReport/054417e4-9d20-427a-9850-862a6f29ac42/2023%20PCLOB%20702%20Report%20\(002\).pdf) of U.S. person information in foreign intelligence operations. AI-enhanced analytic tools increase the capacity to extract insight from those incidental collections.

Second, advanced AI systems can combine open-source intelligence (OSINT), datasets from [commercial data brokers](https://www.brennancenter.org/our-work/research-reports/closing-data-broker-loophole), and lawfully obtained government records. In combination, these sources allow detailed reconstruction of individual behavior patterns without necessarily requiring additional collection authority. In the past, achieving similar insight would have required resource-intensive, targeted surveillance that might have needed additional warrants or triggered legal scrutiny.
If there are no changes to current law, AI would allow such inferences to be made without additional review while still remaining technically within the boundaries of what is legal.

#### The Policy Blind Spot

Current oversight mechanisms focus heavily on whether data collection complies with the law and relevant statutory authority. They rarely scrutinize whether certain synthesis methods or inferential outputs cross new thresholds of sensitivity.

For example, the Privacy and Civil Liberties Oversight Board (PCLOB), which is tasked with ensuring that U.S. counterterrorism efforts don’t violate Americans’ civil liberties, published a major [report](https://documents.pclob.gov/prod/Documents/OversightReport/054417e4-9d20-427a-9850-862a6f29ac42/2023%20PCLOB%20702%20Report%20\(002\).pdf) in 2023 documenting how FISA authorities have been used to surveil Americans and recommending that Congress implement reforms before reauthorizing the program. The report also found that, while backdoor searches had produced huge amounts of data on American citizens, there was little evidence that the data obtained was actually useful. This could be because the information wasn’t accurate or relevant, or simply didn’t provide value to a specific investigation, but it could also be because, at the time, the government didn’t have the tools to extract what it wanted from it.

As AI systems grow more capable, the distinction between collection and analysis becomes increasingly porous. Data that appears benign in isolation may become highly sensitive when processed at scale and combined with other sources. The ongoing controversy over military AI guardrails intersects directly with this now-murky area of the law. Model guardrails often include restrictions on surveillance-related uses and sensitive attribute inference.
Weakening those guardrails could expand analytic capabilities without altering statutory collection authority, meaning that the law would remain formally intact but the informational consequences and privacy implications would shift dramatically.

Based on what we’ve seen thus far, the contract between OpenAI and the DoW doesn’t directly address this blind spot at all, and no one in the leadership at the company or the Pentagon has given any indication that they plan to.

As the current debate over AI-powered surveillance continues, it’s important to recognize that what we’re talking about is not so much illegal surveillance in the conventional sense, but whether legal collection authorities are being paired with analytic tools that lawmakers did not anticipate when designing those laws. In that sense, the greatest threat to civil liberties may not be that AI will expand what the government can collect, but that AI will expand what the government *can know.*

%%