Adapting U.S. Counterintelligence to Combat AI-Enabled Adversarial Influence Operations


1.    Introduction

 

    a. The rise of AI-generated disinformation poses a complex challenge for U.S. counterintelligence (CI). Adversarial states are increasingly leveraging artificial intelligence (AI) tools (from deepfake video and audio to algorithmically generated social media posts) to conduct covert influence campaigns at scale. The U.S. National Counterintelligence and Security Center has warned that foreign adversaries could use AI-crafted “deepfakes (high quality manipulated video, images, text, or audio) to shape public opinion and influence U.S. elections,” undermining public trust in candidates and institutions. In effect, AI allows hostile actors to weaponize misinformation with unprecedented speed and realism, threatening to further erode civil discourse and divide the American public. U.S. counterintelligence must therefore adapt swiftly, in doctrine, capabilities, and partnerships, to detect and disrupt these AI-enabled influence operations across social, corporate, and political domains.

 

    b. Traditional CI focused on espionage and insider threats, but today’s adversaries exploit social media, fake online personas, and synthetic media as strategic weapons. Russia, China, Iran, and other nations have all signaled intent to integrate AI into their information warfare arsenals. The U.S. Intelligence Community’s latest threat assessments underscore the urgency: for example, “Russia is using AI to create highly capable deepfakes to spread misinformation [and] conduct malign influence operations” aimed at the United States. Likewise, China’s military and security services are developing AI systems to generate deceptive content, including fake news and online personas, to advance Beijing’s geopolitical narratives. The strategic scope of this threat spans democratic elections, public opinion on foreign policy, and even corporate reputations. Adapting U.S. counterintelligence to this new landscape is not only a defensive necessity but also crucial to safeguarding democratic institutions from manipulation.

 

    c. This paper explores how U.S. counterintelligence should evolve to meet the challenge of AI-enabled influence operations. It begins by mapping the current threat landscape, detailing adversaries’ use of AI technologies like deepfakes, generative models, and algorithmic targeting in recent influence campaigns. Next, it assesses the limitations of existing U.S. CI capabilities, including legal, operational, and technical gaps, in countering AI-generated disinformation.

 

    d. The core of the paper proposes a comprehensive framework for CI adaptation along three dimensions: (a) technical innovations (e.g. deepfake detection, AI-driven analytics, industry partnerships), (b) operational reforms (training, interagency coordination, rapid response), and (c) legal/policy updates (attribution frameworks, clarified authorities, safeguards for civil liberties). We then examine models from allied countries such as the UK, Estonia, and Israel to glean comparative lessons. The role of public-private partnerships that engage technology firms, media platforms, and civil society is analyzed as a critical element of resilience. Finally, the conclusion synthesizes key insights and offers a strategic roadmap for modernizing U.S. counterintelligence to combat AI-powered influence threats while upholding democratic values.

 

2.    Background and Threat Landscape

 

A fabricated “deepfake” video of Ukrainian President Volodymyr Zelensky was released in 2022, falsely depicting him urging Ukrainian troops to surrender, while the authentic footage shows no such statement. This incident exemplifies how AI-generated synthetic media can be weaponized to serve adversarial narratives in conflict.

 

AI technologies have dramatically expanded the toolkit for malign influence operations. Key advances include deepfakes (highly realistic fake videos or audio), synthetic images (e.g. AI-generated profile photos), generative text models (capable of producing propaganda at scale), and sophisticated algorithmic targeting techniques. Adversarial states such as Russia, China, and Iran have each begun to deploy these tools to shape opinions and sow confusion in target societies.

 

    a. Deepfakes and Synthetic Media: Deepfakes, AI-manipulated video or audio that simulates real people, have emerged as a potent disinformation weapon. The fake Zelensky video in 2022 was one of the first high-profile examples used in warfare. While its quality was imperfect and it was quickly debunked, it demonstrated the disruptive potential of adversaries inserting fabricated messages from trusted leaders. Since then, officials have warned that such AI-fabricated media could be just the “tip of the iceberg” in future conflicts. Russia has invested in this area: by 2025 U.S. intelligence reported that “Russia is using AI to create highly capable deepfakes to spread misinformation, conduct malign influence operations, and stoke further fear.” Adversaries have also experimented with deepfake news broadcasts, inserting AI-generated newscasters into hacked streaming channels to push false narratives; one such incident during the Israel-Hamas conflict was attributed to an Iranian-aligned group. China, too, has begun using synthetic media in influence campaigns: in 2023-24, pro-China networks deployed AI-generated personas as “news anchors” and fake social media profiles (with AI-created profile pictures) to disseminate propaganda and divisive content on issues like drug use and immigration. These examples illustrate a burgeoning threat: fake video or audio that is difficult for the average viewer to distinguish from reality, enabling adversaries to impersonate officials, fabricate inflammatory incidents, or otherwise manipulate perceptions on a grand scale.

 

    b. Generative Text and Bots: Beyond audiovisual media, AI-driven text generation has supercharged traditional influence techniques on social networks. Generative language models (such as GPT-style algorithms) can produce convincingly human-like text, which adversaries are using to power “bot” armies and fake online personas. This allows propaganda to be scaled up without human troll farms. Notably, researchers demonstrated that even older models (GPT-2) could be trained to mimic the style of Russia’s Internet Research Agency trolls, mass-producing divisive social media posts akin to those used to meddle in the 2016 U.S. election. Intelligence reporting indicates that Russian operatives have indeed started doing this in practice. For example, a recent DOJ indictment described a Russian-run bot farm that used AI-generated profiles on Twitter (X) to amplify pro-Kremlin narratives about the Ukraine war. While the reach of this operation was limited (on the order of thousands of followers before takedown), AI made it “over 100 times cheaper” to run than a traditional human troll farm. Similarly, Iran has leveraged AI to generate English-language content for fake news websites and phony personas that promote Tehran’s talking points. A Google security report found that in a recent period, Iran accounted for most of the AI-assisted disinformation activity among state actors. This included Iranian networks using AI-translated or AI-written “news” articles to launder propaganda through seemingly legitimate outlets. The volume, speed, and deniability of text-based disinformation have all increased thanks to AI; malicious actors can automate the creation of fake blog posts, social media comments, and even entire news sites, obfuscating their origin and intent.

 

    c. Algorithmic Targeting and Amplification: Adversaries are also weaponizing the algorithms that curate content online. AI can help identify polarizing wedge issues and tailor disinformation to specific audiences (a technique akin to microtargeted advertising). For instance, Russia and China both conduct “spamouflage” campaigns, flooding online spaces with repetitive posts or memes to drown out unfavorable information. A 2021 RAND study noted that Chinese operators used an AI-driven tactic comparable to “barrage jamming,” overwhelming the hashtag #Xinjiang on Twitter with benign posts about Xinjiang (like scenic photos and export products) to bury discussion of human rights abuses. By manipulating platform algorithms that favor trending or popular content, they artificially boosted pro-China narratives and suppressed critical content. In another case, a sprawling Chinese influence network dubbed “DRAGONBRIDGE” was found to be churning out massive quantities of spam-like posts (in multiple languages) on YouTube, Twitter, and Facebook, including some content apparently authored or enhanced by generative AI. Although much of this content was low-quality and drew little engagement, it shows Beijing’s commitment to using volume and automation to hack the attention economy. Similarly, Russia has combined automated accounts and engagement bots with its state-sponsored media to amplify preferred narratives, for example using bot swarms to push conspiracy theories or get hashtags trending, thereby increasing their visibility in target communities. AI algorithms may also aid adversaries in identifying “influencers” or sympathetic voices in a society and then amplifying or co-opting those voices to lend credence to their messaging. In sum, by exploiting recommendation systems and social media algorithms, state actors can maximize the impact of their influence operations, ensuring that disinformation finds, and emotionally resonates with, the most receptive audience.
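
To make the “barrage jamming” dynamic concrete, the following minimal sketch (in Python) shows one way an analyst might quantify hashtag flooding: comparing post volume, author counts, and the share of duplicate text across time windows. The post records, field names, and thresholds are illustrative assumptions, not real platform data or an operational detector.

```python
"""Illustrative heuristic for spotting hashtag 'barrage jamming'.

Assumes a list of post records with hypothetical fields: text, author, and
an hour timestamp. Real platform data and thresholds would differ.
"""
from collections import Counter, defaultdict

def flooding_report(posts, window_hours=24):
    """Return per-window volume, author count, and duplicate-content ratio."""
    windows = defaultdict(list)
    for post in posts:
        windows[post["hour"] // window_hours].append(post)

    report = {}
    for window, items in sorted(windows.items()):
        texts = [p["text"].strip().lower() for p in items]
        authors = {p["author"] for p in items}
        # Posts whose text appears more than once in the window.
        dup_posts = sum(c for c in Counter(texts).values() if c > 1)
        report[window] = {
            "volume": len(items),
            "unique_authors": len(authors),
            "duplicate_ratio": round(dup_posts / len(items), 2),
        }
    return report

# Toy example: a benign baseline window followed by a flood of identical posts.
posts = (
    [{"hour": h, "author": f"user{h}", "text": f"organic post {h}"} for h in range(10)]
    + [{"hour": 30, "author": f"new_acct{i}", "text": "Beautiful scenery in Xinjiang!"} for i in range(50)]
)
print(flooding_report(posts))
```

A sudden window with high volume, many single-use accounts, and a duplicate ratio near 1.0 is the kind of signature that distinguishes coordinated flooding from organic conversation.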

 

    d. Case Studies of AI-Enabled Influence: Recent events provide a glimpse of the threat’s scale across different domains. In the political sphere, Russia, Iran, and China all attempted to meddle in the 2020 U.S. elections and beyond, with AI increasingly in play. U.S. intelligence assessments noted that as these adversaries look to future elections, they are “stepping up” influence operations and will likely lean on more AI-generated content to avoid detection. The FBI and NSA warned that deepfake propaganda, such as fabricated compromising videos of candidates, could be deployed to sway voters or cast doubt on electoral outcomes. In the geopolitical realm, Russia’s use of deepfakes during its war in Ukraine (e.g. the fake Zelensky surrender video, or a deepfake of Putin declaring peace that circulated online) exemplifies how AI propaganda can be used to try to demoralize opponents or fracture alliances. Iran, during the 2023 Israel-Hamas conflict, not only engaged in cyber-attacks but also ran an influence campaign featuring a deepfake news bulletin: an IRGC-linked hacker group breached Middle Eastern TV streams and inserted a fake AI-generated newscaster spreading Iranian disinformation. Microsoft analysts called this a “fast and significant expansion” of Iran’s influence toolkit, directly attributing the operation’s sophistication to the use of AI. In the corporate domain, U.S. companies have been targeted by foreign disinformation aiming to damage their reputation or financial standing. For example, Russian influence operatives have spread false rumors to hurt stock prices or consumer trust in certain firms, and China has similarly run information campaigns against Western businesses (especially those that criticize Beijing). AI may enable these anti-corporate influence attacks to be conducted more covertly and at greater scale, via fake consumer reviews, AI-generated bogus news reports, or even deepfake audio of a CEO making damaging statements. Each of these cases underscores the evolving threat landscape: foreign actors are actively experimenting with AI to enhance their influence operations, and we should expect these techniques to grow more sophisticated with time.

 

3.    Assessment of U.S. Counterintelligence Capabilities

 

Confronted with this fast-moving threat, U.S. counterintelligence finds itself stretched across legal, operational, and technological dimensions. The current CI doctrine and toolset were not originally designed to handle mass disinformation on open online platforms. As such, there are notable limitations and vulnerabilities in the U.S. approach to AI-generated influence campaigns:

 

    a. Doctrinal and Legal Constraints: Traditionally, “counterintelligence” in the U.S. context has focused on identifying and thwarting foreign espionage, not on countering propaganda or disinformation per se. There is still no clearly established doctrine assigning primary responsibility for combating foreign influence operations that occur in the information space (especially when they intersect with protected domestic speech). Legal limitations further complicate matters: the Intelligence Community is restricted from spying on U.S. persons, and law enforcement must navigate First Amendment protections. Adversaries exploit this gray zone by injecting fake content into domestic social media ecosystems, knowing U.S. agencies must tread carefully. For instance, during election seasons the FBI can monitor and flag foreign trolling or deepfakes, but it cannot remove content or directly counter-message on domestic platforms without running into legal barriers. Recent controversies have in fact made agencies more cautious: in late 2023, under public and political scrutiny, the FBI reportedly “stopped sharing information with social media companies” about foreign influence campaigns (from Russia, China, Iran) that it detected. This retrenchment was driven by fears of being accused of censorship or overreach. The outcome, however, is a reduced flow of threat intelligence to the very platforms where malign content spreads, a de facto win for foreign propagandists. As the Department of Justice itself acknowledges, engaging social media firms is “critical” to disrupting foreign influence, yet it must be done in a manner “entirely consistent with the First Amendment,” with platforms free to act (or not) on government warnings. In practice, this means U.S. CI faces a delicate balancing act: how to quickly blunt the impact of AI-fueled disinformation while scrupulously respecting free speech and avoiding domestic political entanglement. The lack of updated legal frameworks for attribution and response to foreign AI-driven disinformation (short of what’s provided for election interference or cybercrimes) is a significant gap. There is as yet no specific federal law against deploying deepfakes in political campaigns, for example, nor a clear mandate for any agency to lead a counter-disinformation operation on U.S. soil. This ambiguity hampers proactive action.

 

    b. Operational Gaps and Coordination Issues: The U.S. government’s efforts to counter influence operations are fragmented across multiple entities with overlapping but not identical missions. The FBI’s Foreign Influence Task Force (FITF) focuses on foreign election interference; the Department of Homeland Security’s CISA monitors disinformation related to critical infrastructure (including elections) but has been criticized and even legally challenged for its role in flagging false content. The State Department’s Global Engagement Center (GEC) works on countering foreign propaganda abroad, not domestically. And within the Intelligence Community, a new Foreign Malign Influence Center (FMIC) was established under ODNI in 2023 to integrate analysis of foreign influence threats, but it is still maturing and primarily an intelligence fusion center rather than an action arm. This patchwork approach leads to gaps in coordination. Information sharing between agencies can be inconsistent. For example, the CIA might detect a deepfake influence campaign targeting overseas audiences, but legal barriers could hinder sharing specifics with the FBI if U.S. persons are involved, and vice versa. A 2021 study of U.S. counterintelligence highlighted the need for “deeper integration of CI operations across military, law enforcement, intelligence agencies, and other public and private sector entities” to meet rapidly changing threats. Currently, such integration remains a work in progress. There is no standing “rapid response team” solely dedicated to debunking or countering high-impact disinformation incidents in real time; agencies often must assemble ad hoc teams when a viral fake emerges. The absence of a unified command or clear playbook means responses can be slow and clumsy. By the time an interagency meeting convenes to assess a viral deepfake, the false narrative may already have spread to millions.

 

    c. Technical and Analytical Limitations: Technologically, the U.S. government’s capacity to detect and analyze AI-generated content lags behind the threat in some areas. While research labs (including DARPA programs) have made advances in deepfake detection algorithms, these tools are not yet foolproof or widely deployed in operational settings. Sophisticated actors can train deepfakes to evade automated detectors, and even when detection works, attributing the source (tying a fake video or a bot network to a foreign government) is a formidable challenge. Open-source analysts and AI companies often identify fake personas or videos before the government does, thanks to their access to big data from social platforms. U.S. counterintelligence units do employ talented OSINT (open-source intelligence) analysts, but they face data access hurdles and volume overload. Each day, vast amounts of social media data would need to be sifted to spot the tell-tale patterns of AI influence operations. Currently, much of this burden falls on private sector partners (like social media companies’ threat intelligence teams) who then pass tips to government. This reactive posture means CI may be one step behind fast-moving disinformation bursts. Another vulnerability is the lack of training and awareness about AI-enabled threats across the broader national security and corporate security communities. As one study noted, even corporate leaders often give a “blank look” when briefed on foreign malign influence (FMI) issues, indicating that counterintelligence thinking has not been internalized in the corporate sector. If mid-level CI officers or cybersecurity staff are unfamiliar with how generative AI might be leveraged in, say, a social engineering attack or an anti-brand smear campaign, they might miss warning signs. Within government, specialized analytical units for influence operations exist (e.g. at NSA, CIA’s Open-Source Enterprise, and within FBI), but expanding their AI analytics capabilities will take time and investment. There is also a risk of technological surprise: adversaries are innovating with new AI techniques (for example, using AI to generate modified memes or exploiting deepfake audio for voice phishing), and the U.S. may not anticipate them. In summary, the U.S. currently faces a mismatch: adversaries have access to cutting-edge generative tools available commercially, whereas government counter-tech efforts are often mired in lengthy R&D and procurement cycles. Until U.S. counterintelligence can nimbly leverage AI for its own detection and analysis needs, it will remain partially outpaced by the threat.

 

    d. Policy and Resource Challenges: As a relatively new mission area, countering AI-enabled influence operations does not yet have a firmly established resource base or policy framework. Questions abound: Who should lead a whole-of-government response to a “killer deepfake” aimed at U.S. national security (for instance, a fake video of a U.S. president ordering military action)? What rules of engagement govern disruption of foreign propaganda infrastructure (like takedowns of bot servers or sanctioning of individuals who create deepfakes)? How can the government counter adversary disinformation without undermining public trust (which could happen if countermeasures are perceived as government censorship or propaganda themselves)? These policy questions are still being debated. The National Security Council and ODNI have begun formulating strategies that mention the threat, and there are legislative proposals urging the intelligence community to treat foreign deepfakes as an “emerging threat” priority. Yet, at present, much of the effort remains piecemeal. For example, the National Defense Authorization Act for 2024 requested reports on deepfakes, but concrete funding to CI agencies specifically for anti-disinformation programs is limited. Moreover, within the domestic context, any government attempt to directly counter-message or correct falsehoods faces political pushback (as seen in the ill-fated attempt to create a DHS Disinformation Governance Board, which was quickly disbanded amid controversy). Thus, CI professionals operate in a charged environment, where acting too slowly carries security risks, but acting too aggressively risks political backlash or civil liberties infringements. This complicates everything from hiring (e.g., bringing more data scientists and AI experts into CI roles) to forging partnerships (some technology firms have grown wary of cooperation after facing subpoenas and criticism). Until policies catch up to clearly empower and limit CI in this domain, the U.S. response to AI-enabled influence will likely remain ad hoc and under-optimized.

 

    e. In summary, U.S. counterintelligence today is ill-equipped in certain critical ways to counter adversaries’ AI-enabled influence operations. The challenges span the strategic level (lack of defined doctrine and authorities), operational level (coordination and rapid reaction gaps), and technical level (insufficient tools and expertise). However, these are not insurmountable problems. With focused adaptation (as outlined in the next section) U.S. CI can evolve to better detect, attribute, and disrupt malicious AI-driven influence, while still guarding the principles of law and liberty that define an open society.

 

4.    Proposed Framework for CI Adaptation

 

Adapting U.S. counterintelligence to combat AI-enabled influence operations requires a multi-pronged framework encompassing technical, operational, and legal/policy measures. The response must be as innovative and agile as the threat. Below, we outline recommendations in each area:

 

    a. Technical Adaptations: AI Detection, Analytics, and Innovation

 

        (1) Invest in Advanced Detection Tools: Technological solutions are vital to spotting and filtering AI-generated false content at speed and scale. U.S. agencies should significantly invest in R&D and deployment of deepfake detection systems, synthetic text detection, and network analysis tools driven by AI. For example, algorithms that analyze subtle artifacts in videos (such as irregular eye blinking or lip-sync errors) can help flag deepfake videos in real time. The intelligence community and DARPA have already pursued programs in this vein (e.g. the Media Forensics (MediFor) and Semantic Forensics (SemaFor) programs), but these need acceleration and broad adoption. Likewise, machine learning classifiers that detect AI-written text or bot-like behavior on social media should be integrated into CI open-source monitoring. These tools can be augmented by AI-based anomaly detection, using pattern recognition to find inauthentic coordinated behavior (large clusters of new accounts pushing the same talking points, sudden surges in posts on a niche issue, etc.). Such anomalies often betray influence operations even if the content is not obviously fake. The private sector is developing many of these capabilities, so an effective approach is partnering with AI firms and platform companies to leverage their detection engines (for instance, Facebook/Meta and Google have teams and algorithms devoted to detecting coordinated fake account networks and manipulated media). By incorporating those feeds and alerts into CI workflows, the government can essentially outsource some of the heavy data crunching to industry experts. Additionally, the CI community could stand up its own automated content provenance system, tagging official media from U.S. government sources with cryptographic authenticity markers (a concept supported by the Coalition for Content Provenance and Authenticity, C2PA) so that fakes purporting to be from government officials can be quickly exposed. Embracing technical innovation, however, must go hand in hand with acknowledging its limits: no detector is perfect. Thus, technical efforts should be paired with human analytic triage (skilled analysts to review flagged items) and with public education on not taking every video at face value.
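
As a concrete illustration of the content-provenance idea, the sketch below signs the hash of an official media file with an agency-held key so that any circulating copy can later be checked against the published signature. It is a deliberately simplified stand-in, assuming the third-party Python cryptography package; real C2PA-style provenance relies on certificate-backed signatures and embedded manifests rather than a bare detached signature.

```python
"""Simplified content-provenance sketch (C2PA-inspired, not the C2PA spec).

Uses the third-party `cryptography` package for Ed25519 signatures. The
publishing agency signs the SHA-256 digest of official media; anyone holding
the public key can verify whether a circulating copy is authentic.
"""
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_media(private_key, media_bytes: bytes) -> bytes:
    """Return a signature over the media's SHA-256 digest."""
    return private_key.sign(hashlib.sha256(media_bytes).digest())

def verify_media(public_key, media_bytes: bytes, signature: bytes) -> bool:
    """Check whether the media matches a signature issued by the agency."""
    try:
        public_key.verify(signature, hashlib.sha256(media_bytes).digest())
        return True
    except InvalidSignature:
        return False

# Demo: the agency signs an official clip; a tampered copy fails verification.
agency_key = Ed25519PrivateKey.generate()
public_key = agency_key.public_key()

official_clip = b"...official video bytes..."
signature = sign_media(agency_key, official_clip)

print(verify_media(public_key, official_clip, signature))             # True
print(verify_media(public_key, b"...deepfaked bytes...", signature))  # False
```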

 

        (2) Leverage AI for CI Analysis: Just as adversaries use AI, U.S. counterintelligence should deploy AI to fight AI in the information domain. This means adopting artificial intelligence and big-data analytics to enhance our understanding of influence operations. One approach is to use natural language processing (NLP) and machine learning on the massive volumes of open-source data, scanning social media, forums, and news for signs of foreign influence. AI systems can cluster content and map narratives to see how a disinformation theme spreads and mutates across platforms. For example, an AI-driven dashboard could show in real time which hashtags or keywords known Russian troll farms are amplifying. Another application is using AI to profile and attribute threat actors: machine learning can detect writing style fingerprints or visual quirks in deepfakes that may link to a particular source or generation method, aiding attribution to a country or group. CI agencies might also employ predictive analytics, using AI models to simulate how an influence campaign might evolve and whom it might target, which could inform preemptive counter-messaging. Importantly, adopting AI in analysis requires close collaboration with tech companies and academia to get the latest algorithms and to ensure access to training data (much of which resides with social media platforms). There should be joint initiatives, perhaps through the Intelligence Advanced Research Projects Activity (IARPA), to develop AI models specifically tuned to detect state-sponsored disinformation patterns. By being on the cutting edge of AI research, U.S. counterintelligence can turn the tables, using the same force-multiplying technology to defend truth that adversaries use to spread lies. As a 2021 RAND analysis noted, emerging technologies are a “double-edged sword”: they enable threats but also offer “significant opportunities for countering such threats” if leveraged properly.
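
A minimal sketch of the narrative-clustering idea follows: it uses TF-IDF vectors and cosine similarity (via scikit-learn) to group near-duplicate posts into candidate narratives, the kind of building block an analyst dashboard might rest on. The sample posts and similarity threshold are illustrative assumptions, not operational data.

```python
"""Minimal narrative-clustering sketch using TF-IDF and cosine similarity.

Requires scikit-learn. Groups posts whose vocabulary overlaps heavily,
approximating how an analyst tool might track a talking point as it spreads
and mutates across platforms.
"""
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "NATO labs are behind the outbreak, leaked documents show",
    "Leaked documents reveal NATO laboratories caused the outbreak",
    "Local election officials confirm routine ballot audit schedule",
    "The outbreak was engineered in NATO-run labs, insiders say",
    "City council approves new park funding for next year",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
similarity = cosine_similarity(vectors)

# Greedy clustering: place each post in the first cluster it closely matches.
THRESHOLD = 0.3
clusters = []
for i in range(len(posts)):
    for cluster in clusters:
        if similarity[i, cluster[0]] >= THRESHOLD:
            cluster.append(i)
            break
    else:
        clusters.append([i])

for idx, cluster in enumerate(clusters):
    print(f"Narrative {idx}:")
    for post_id in cluster:
        print("  -", posts[post_id])
```

In a real workflow the clusters would be enriched with account metadata and timelines, letting analysts see when a narrative surges and which networks are pushing it.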

 

        (3) Strengthen Public-Private Tech Partnerships: Given that much of the AI innovation happens in the private sector, U.S. CI should establish robust partnerships with technology firms, social media companies, and cybersecurity companies. These partnerships can facilitate information sharing about emerging threats and joint development of countermeasures. For instance, if Twitter/X or Meta detects an AI-driven bot network tied to a foreign actor, there should be seamless channels to alert the FBI or ODNI’s FMIC, and vice versa. Embedding liaisons or technology fellows from companies into government fusion centers (and government experts into companies) could improve mutual understanding. In the lead-up to elections, public-private task forces might be formed to identify deepfakes or algorithmic manipulation targeting voters, so that CI can provide threat intel and companies can handle content moderation. There is precedent in collaborative success: during the 2020 election cycle, platforms and U.S. agencies worked together to take down fake Iranian accounts impersonating American voters. However, these efforts were informal; a more structured partnership would institutionalize success. One concrete proposal is to create a joint AI Disinformation Lab where government, industry, and university researchers share data and tools to test detection capabilities against simulated adversary tactics. Another area of partnership is developing standards, for example working with AI developers to implement watermarking of AI-generated content or content authentication features that make it easier to trace and flag synthetic media. The White House took a step in mid-2023 by securing voluntary pledges from leading AI companies (such as Google, Meta, OpenAI) to develop such watermarking and to address AI risks including disinformation. U.S. counterintelligence should build on this momentum, treating industry as an essential ally in the fight. Notably, these partnerships need to navigate trust issues on both sides: tech firms worry about user privacy and being seen as agents of the state, while government worries about data access and companies’ profit motives. Clear legal frameworks (discussed below) and transparency in these collaborations can help. In summary, by creating a tech ecosystem of shared vigilance, where innovations in detection rapidly flow to those on the frontlines, the U.S. can keep pace with adversaries’ AI tools.

 

        (4) Rapid Response and Communications: From a technology perspective, speed is critical. The U.S. should establish a capability for rapid analysis and public notification when a malicious deepfake or disinformation surge is detected. This might involve setting up a 24/7 “Disinformation Quick Reaction Team” (QRT) that can, within hours of a high-impact fake emerging, do the forensic analysis (with the help of AI tools as above) and validate or debunk the content. The team would then coordinate with relevant agencies to put out an official statement or assist in getting the platform to label/remove the content. Currently, responses can take days, by which time a lie has spread widely. A model to emulate is the rapid response team construct used for national cybersecurity incidents; a similar approach could be applied to information incidents. Technologically, this means having pre-configured analytical pipelines (for example, if a suspect video surfaces, automated tools analyze it against known footage, check for deepfake artifacts, and query intelligence databases for any related activity). It also means having communication tools ready: secure channels to brief social media companies, as well as public communication outlets (like an official “rumor control” website or social media account that posts debunkings). During the 2020 election, CISA ran a “Rumor Control” webpage to dispel common misinformation about voting. A more advanced version could be jointly run by CI agencies and employ interactive content (even AI-chatbots to answer public queries about potential fakes). The goal is to inoculate the public quickly once a false AI-generated story breaks, by flooding the zone with accurate information, effectively conducting counter-influence operations. Technically enabling this requires not only detection, but also effective attribution tools (to quickly identify if a piece of content likely originated from a foreign source). For example, if a deepfake video of a U.S. general appears, being able to trace digital fingerprints (like metadata or known AI model signatures) might reveal it was produced using a tool previously tied to a Russian campaign, lending confidence to expose it as foreign. Taken together, these technical measures (detection, AI-assisted analysis, partnerships, and rapid-response infrastructure) would significantly raise the U.S. CI posture against AI-enabled disinformation.
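
The skeleton below illustrates what a pre-configured triage pipeline could look like in code: each stage is a stub standing in for a real capability (a hash lookup against known official footage, a deepfake-artifact model, an attribution database query), and the orchestration simply runs the stages in order and assembles a single report for the duty team. All function names, scores, and data here are hypothetical.

```python
"""Skeleton of a rapid-response triage pipeline for a suspect video.

Every check is a placeholder for a real capability; the point is the
orchestration pattern: run pre-configured analyses in order, collect
findings, and hand one report to the duty team quickly.
"""
import hashlib
from dataclasses import dataclass, field

@dataclass
class TriageReport:
    media_sha256: str
    findings: list = field(default_factory=list)

def check_known_footage(media: bytes, report: TriageReport) -> None:
    # Placeholder: compare the hash against an index of authentic official footage.
    known_official_hashes = set()  # hypothetical index
    if report.media_sha256 in known_official_hashes:
        report.findings.append("Matches published official footage")
    else:
        report.findings.append("No match in official-footage index")

def check_deepfake_artifacts(media: bytes, report: TriageReport) -> None:
    # Placeholder: a trained detector would return a manipulation likelihood.
    score = 0.87  # stub value for illustration only
    report.findings.append(f"Deepfake-artifact score: {score:.2f}")

def check_attribution_leads(media: bytes, report: TriageReport) -> None:
    # Placeholder: query intelligence holdings for related infrastructure or tooling.
    report.findings.append("Attribution query queued (no automated hit)")

PIPELINE = [check_known_footage, check_deepfake_artifacts, check_attribution_leads]

def triage(media: bytes) -> TriageReport:
    report = TriageReport(media_sha256=hashlib.sha256(media).hexdigest())
    for stage in PIPELINE:
        stage(media, report)
    return report

if __name__ == "__main__":
    suspect_clip = b"...suspect video bytes..."
    for line in triage(suspect_clip).findings:
        print("-", line)
```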

 

    b. Operational Adaptations: Training, Interagency Coordination, and Strategy

 

        (1) Specialized Training and Expertise: The counterintelligence workforce needs new skills and knowledge to confront AI-driven influence operations. This means expanding training for intelligence analysts, investigators, and even diplomats on the nuances of modern disinformation tactics and AI technologies. CI staff should learn the basics of how deepfakes are made, how bots operate, and what technical and behavioral indicators can reveal them. Incorporating scenario-based training (such as war-gaming a deepfake incident or a viral disinformation campaign) would help officers practice decision-making in these novel situations. The U.S. could establish a dedicated Influence Operations Training Program within agencies like the FBI and CIA, possibly leveraging experts from academia or the tech sector to deliver short courses on topics like “AI and cognitive hacking” or “open-source social media analysis.” Furthermore, hiring and talent management should adapt to bring in or develop the needed expertise. This might involve recruiting data scientists or those with tech industry backgrounds into CI roles, as well as creating career tracks that blend classical counterintelligence with information operations skill sets. Each FBI field office’s counterintelligence squad, for example, might benefit from having a designated social media and AI specialist. On a broader scale, the Office of the Director of National Intelligence (ODNI) could sponsor an AI Fellowship program that places technologists in various CI components to help modernize analytic tools and mentor staff. Beyond technical skills, training should also cover cultural and cognitive aspects: understanding how foreign influence campaigns aim to exploit societal divides and learning how to craft counter-narratives that can resonate without appearing like propaganda. The more U.S. CI personnel internalize the reality that “influence is the new frontier of conflict,” the better prepared they will be to take proactive measures. In essence, human capital is as important as technical capital in this fight: smart tools are useless without savvy operators. Building a cohort of CI professionals fluent in both national security and AI/information science is a foundational operational step.

 

        (2) Interagency Coordination and Structure: Combating influence operations is inherently a cross-agency challenge, so improving coordination is paramount. The U.S. should create clearer structures for joint action against foreign disinformation. One recommendation is to elevate the mission, perhaps by establishing a permanent Interagency Center or Task Force for Counter-Influence Operations (analogous to the National Counterterrorism Center, but for disinformation). This center, possibly under ODNI or the National Security Council, would fuse intelligence from all sources (NSA’s signals intelligence, CIA’s human intelligence, FBI and DHS’s domestic findings, State’s GEC analyses) related to foreign influence campaigns. It would provide a common operating picture and ensure that when a major incident occurs, responses are unified. The Foreign Malign Influence Center (FMIC) set up at ODNI is a starting point, but it needs to be resourced and empowered to not just analyze but also coordinate operations (within legal bounds). Regular interagency meetings and exercises should be conducted to iron out roles: for example, if a deepfake targets U.S. military personnel, DoD and FBI coordinate on messaging; if election disinformation arises, FBI, CISA, and state/local authorities sync up through an existing Election Security Task Force. Protocols for information-sharing should be updated so that critical data (like platform reports of an influence campaign) can be quickly disseminated to all relevant players without bureaucratic delay. Improved coordination also means clarifying leadership for different scenarios, perhaps through a national strategy specifying that, for instance, the FBI leads the response to foreign influence activity inside the U.S., the State Department (GEC) leads outside the U.S., and DHS/CISA coordinates with private-sector infrastructure operators. War games and after-action reviews can be used to continuously refine these relationships. Additionally, establishing joint teams on the ground can help: for example, before and during the 2024 elections, the FBI embedded personnel in social media companies and stood up command posts that included analysts from NSA, CISA, and others to handle threat information. Institutionalizing such joint efforts for every major national event (elections, summits, crises) would improve agility. The overarching goal is that U.S. counterintelligence operates as a cohesive enterprise against influence operations, rather than siloed agencies each seeing a piece of the puzzle. As experts have argued, a “fundamental transformation” is needed to break down compartmentalization and foster integration in CI. That transformation must be organizational as much as technological.

 

        (3) Rapid Response and Playbooks: In the operational realm, speed and clarity of response can make the difference in blunting an influence attack. U.S. CI should develop playbooks and standing plans for various disinformation contingencies. For example, a ‘Deepfake Crisis Protocol’ might specify who verifies authenticity, how intelligence is gathered on its origin, which agency leads the public response, and how platforms and allies are engaged. Pre-planning for these scenarios will save precious time under live fire. Similarly, playbooks for election interference (building on what was done in 2018 and 2020), for corporate disinformation attacks, or for foreign propaganda during military conflicts should be crafted. These should include communication strategies, e.g., using authoritative voices to counter false narratives. On a tactical level, establishing rapid reaction teams (as noted in the technical section) will only work if they fit into an operational plan that links detection to decision to action within hours. Therefore, CI leadership must ensure there are clear thresholds and delegations: at what point do you call something a state-sponsored influence operation? Who has authority to notify the public or sanction the perpetrators? Deciding these in advance, and exercising them, will make responses more confident and effective. A related piece is engagement with victims or targets of disinformation. Operational protocols should involve reaching out to those who are being impersonated or maligned by foreign fakes, whether it’s a political candidate, a company CEO, or a community group, to assist them in responding. The FBI, for instance, as part of its mission to protect U.S. persons, could extend its foreign influence warnings to include notifying individuals who are the subject of deepfake smear campaigns, much as it notifies hacking targets. Internally, a culture of agility and preemption should be nurtured: CI should not only react to influence operations but also consider proactive measures (like quietly flagging to a social media platform that a certain emerging narrative looks inauthentic, before it fully trends). Overall, making rapid response a core competency requires practice, through drills and real-world iteration, until the relevant agencies can act almost reflexively in the first critical moments of a disinformation flashpoint.
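
To show how such a playbook could be made concrete enough to drill against, the sketch below encodes a hypothetical Deepfake Crisis Protocol as structured data: per-phase leads, actions, and time targets that a rapid-reaction team could exercise and audit. The agencies, actions, and deadlines are illustrative placeholders, not actual policy.

```python
"""Illustrative encoding of a hypothetical 'Deepfake Crisis Protocol' playbook.

Role assignments, actions, and deadlines are placeholders meant to show how a
playbook can be written down, drilled, and audited, not an actual U.S. plan.
"""
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    lead: str
    deadline_hours: int
    actions: tuple

DEEPFAKE_CRISIS_PROTOCOL = [
    Phase("Verification", lead="Interagency QRT", deadline_hours=2,
          actions=("Run forensic detection tools",
                   "Compare against authentic source footage",
                   "Confirm or rule out manipulation")),
    Phase("Attribution", lead="ODNI FMIC", deadline_hours=12,
          actions=("Correlate with known foreign campaigns",
                   "Reach an initial confidence assessment")),
    Phase("Public response", lead="Designated lead agency", deadline_hours=24,
          actions=("Notify affected individuals and platforms",
                   "Issue public debunking through official channels")),
]

def print_checklist(playbook):
    """Render the playbook as a drill checklist."""
    for phase in playbook:
        print(f"[{phase.name}] lead: {phase.lead}, target: {phase.deadline_hours}h")
        for action in phase.actions:
            print("   -", action)

print_checklist(DEEPFAKE_CRISIS_PROTOCOL)
```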

        (4) Integration with Broader National Strategy: Countering AI-enhanced influence operations should be embedded in the broader U.S. national security strategy and foreign policy. Operationally, this means aligning CI efforts with diplomatic and military initiatives. For example, when confronting adversary influence campaigns overseas (like Russian disinformation in Europe or Chinese propaganda in Asia), U.S. CI elements can support State Department public diplomacy and allied governments by sharing intelligence and debunking malign efforts. We should work with NATO and other allies on joint counter-disinformation exercises and perhaps coordinate messaging responses for major propaganda events (for instance, NATO might join the U.S. in exposing a Russian deepfake so that allies present a unified front). Domestically, integrating into broader strategy means connecting the dots between influence operations and other threats. For instance, if a cyber breach is accompanied by an information leak (a favored Russian tactic), cyber units and CI units need to collaborate closely. The National Counterintelligence Strategy (2020–2022) acknowledged “foreign influence” as a key pillar, but future iterations should explicitly address AI and deepfakes, providing high-level guidance that trickles down to agency implementation. Finally, the U.S. counterintelligence mindset must evolve: protecting secrets now also means safeguarding the integrity of information in the public domain. This requires accepting some risks and moving beyond a purely reactive stance. CI units might at times engage in offensive counter-influence operations, for instance feeding deceptive data to adversary AI systems or disrupting the command and control of a bot network, under proper authorities. Incorporating these possibilities into national covert action frameworks or military information operations (IO) doctrine will ensure that, when necessary, the U.S. can go on the offensive against the worst actors. Aligning CI with the larger strategic context ensures resources and attention flow to the problem and that efforts are synergized rather than isolated.

    c. Legal and Policy Adaptations: Frameworks, Authorities, and Liberties

 

        (1) Update Legal Frameworks for Attribution and Response: One of the thorniest challenges is how to legally address AI-enabled influence attacks that fall in a gray zone between crime, covert influence, and protected speech. The U.S. should develop clearer statutory and regulatory tools to tackle this threat. First, establishing a framework for attribution is key, perhaps modeled on how cyber-attacks are handled, where intelligence assessments attributing state responsibility can trigger sanctions or indictments. If a foreign actor is clearly tied to a malicious deepfake or disinformation campaign (for example, via digital forensics or source intelligence), there should be a mechanism to publicly name and shame the culprit and impose consequences. This could involve expanding the use of Executive Orders that sanction foreign individuals or entities for engaging in election interference and disinformation (some orders already exist but could be refined to explicitly include the use of AI deepfakes in malign influence). On the criminal side, laws might need updating: the U.S. could consider making it a federal offense to knowingly create or disseminate synthetic media impersonating a U.S. official or candidate with intent to mislead. Some states have begun passing laws against certain uses of deepfakes (e.g. in elections or non-consensual pornography), but a consistent federal approach would strengthen deterrence. Additionally, authorities for agencies may need adjustment. The FBI and other security agencies likely require explicit authorization to monitor foreign influence activity on social media (within constitutional limits) and to conduct influence-focused investigations (which traditionally might not have been a priority without espionage or cyber elements). Clarifying these authorities in law, including oversight provisions to prevent abuse, can empower CI officers to pursue leads on AI-driven disinformation with confidence. Congress has shown interest: multiple bills have been proposed to examine deepfake threats and require the DNI to report on foreign deepfake campaigns. This legislative attention should translate into actionable legal provisions enabling agile response.

 

        (2) Refine Policy on Engagement with Platforms: As noted, partnerships with social media and tech firms are essential but fraught. Policymakers should create transparent guidelines for government interaction with online platforms regarding content moderation of foreign-sourced disinformation. The Department of Justice’s published principles on Foreign Malign Influence engagement (which emphasize that sharing threat info with companies is critical, and that any action is voluntary for the platform) are a good starting point. These principles could be formalized into something like a Code of Conduct for public-private collaboration on disinformation: the government provides timely threat intelligence; platforms decide and implement moderation; and both respect users’ rights. Such guidelines could help insulate CI agencies from accusations of “censorship by proxy” by clearly delineating roles. Moreover, establishing a legal safe harbor for companies to share data with government about influence operations could alleviate concerns over privacy lawsuits or regulatory penalties. For example, an amendment to the Stored Communications Act or a new provision in law could permit social media companies to share certain anonymized or technical information (IP addresses of bot networks, metadata of deepfake videos) with designated authorities without violating user privacy agreements, when done to investigate foreign interference. At the same time, policies must ensure robust civil liberty protections. Independent oversight (by inspectors general or perhaps a public-private audit board) can monitor that CI interactions with platforms do not stray into trying to suppress domestic viewpoints or legitimate dissent. Any content-based action must focus on foreign malicious actors, not Americans exercising free speech, even if that speech echoes disinformation. This distinction can blur in practice (since foreign narratives often get picked up domestically), but the policy should err on the side of protecting expression. In essence, the U.S. needs to formalize how democracy defends itself in the information domain without undermining the values of transparency and free debate. A clear policy framework, debated openly, will help build public trust in CI efforts, inoculating against the criticism that counter-disinformation work is just “censorship” in disguise.

 

        (3) Ensuring Protection of Civil Liberties: Any expansion of counterintelligence activity into the informational realm raises valid concerns about government overreach. The strategy for adaptation must therefore bake in safeguards to protect privacy and civil liberties. This includes stringent rules on the collection, retention, and use of data related to U.S. persons. For instance, if CI units are monitoring social media for foreign bots, they will inevitably encounter communications by Americans. Policies (and training) should reinforce that analysts cannot pursue or catalog information solely about U.S. persons’ beliefs or speech, unless it’s germane to a foreign threat case and consistent with FBI/ODNI guidelines. Moreover, measures like the DOJ’s guideline that “it is up to the companies whether to take action on content; no coercion by government” must be adhered to in practice. Building rigorous oversight mechanisms can help: Congress should oversee the CI influence operations mission as it does other intelligence missions, and perhaps new reporting requirements can be instituted (e.g., an annual public report by ODNI on foreign influence activities and the U.S. response, with unclassified metrics that show the scope of the problem and actions taken). The U.S. can also consult external watchdogs, for example engaging civil society groups focused on digital rights in the policy development process. An independent advisory board on counter-disinformation could be one idea, bringing together civil libertarians and technology experts to review initiatives for any rights implications. Additionally, the government should be transparent with the public about the threat and its approach. When possible, declassifying and releasing information on foreign influence campaigns (as was done with some FBI bulletins about Russian disinformation during elections) lets citizens themselves be aware and vigilant, reducing reliance on secretive actions. A key principle is that defensive influence operations at home should rely on truth and exposure, not censorship. U.S. CI might counter a lie with a loud truth, or with disruption of foreign networks, but it should not be in the business of policing political speech. This principle must be enshrined so that efforts focus on the source of malign influence (the foreign actor) rather than on the message content per se. Internationally, the U.S. can even lead by example by championing norms against certain malicious uses of AI. For instance, it could push for global agreements declaring that using deepfakes to spread harmful misinformation is unacceptable state behavior, akin to existing norms in cyber warfare. While enforcement is tricky, articulating these norms reinforces the U.S. commitment to maintaining an information environment where truth can triumph without draconian control.

 

        (4) Enhanced Legal Authorities for Disruption: Currently, the U.S. government’s options for directly disrupting foreign influence infrastructure are limited and often cumbersome. To more effectively combat AI-enabled operations, policymakers should explore giving agencies tailored authorities to act against the technological underpinnings of disinformation campaigns. For example, if a server farm abroad is identified as controlling an army of AI social media bots targeting Americans, can the U.S. legally take it down or block its traffic? Cyber Command and NSA have some mandate to engage foreign cyber operators (as seen in operations to mute Russian troll farms during elections), but clearer authorization could be useful. An update to warfighting doctrines (like DoD’s concept of “Information Operations”) might explicitly include preemptive cyber or influence actions against imminent disinformation attacks. On the law enforcement side, enabling faster takedown actions through courts against foreign disinformation websites or fraudulent domains (perhaps by treating them like malware sites) could disrupt operations. Legislative changes could also facilitate sanctions and prosecutions: expanding the Foreign Agents Registration Act (FARA) or related statutes to cover those who knowingly disseminate deepfake propaganda on behalf of foreign powers could empower the DOJ to indict proxy actors (like the two Iranian nationals charged for a cyber-enabled disinformation scheme to intimidate voters during the 2020 election). While such cases may often be symbolic (as the persons are overseas), they help spotlight the perpetrators and deter others. Any expanded authority must be paired with oversight to prevent misuse, but thoughtfully crafted tools can significantly raise the cost for adversaries. In short, U.S. counterintelligence and security agencies need a flexible legal toolbox to go beyond passively observing influence operations; they should have options to disrupt and deter those operations at their source. Whether through cyber means (shutting down botnets, sanctioning AI software providers aiding adversaries) or through legal injunctions (ordering removal of obviously fake videos impersonating government officials, under narrowly defined criteria), a proactive stance is needed. Crafting these authorities will require collaboration between Congress, the Intelligence Community, and legal experts to thread the needle between efficacy and liberty. Done right, it will signal to foreign adversaries that there is a price to pay for assaulting our democracy with AI-enabled lies.

 

5.    Allied Models and Comparative Lessons

 

The United States is not alone in facing AI-fueled influence operations. Allied nations and partner democracies have been grappling with similar threats, often from the same state actors. Examining how some of these countries have responded provides valuable lessons and potential models for U.S. counterintelligence adaptation. In particular, the experiences of the United Kingdom, Estonia and other Baltic states, and Israel (among others) offer insights into building resilience and strategic responses.

 

    a. United Kingdom: The UK has taken a multifaceted approach to counter disinformation and could serve as a close comparison for U.S. efforts. At a policy level, the UK government emphasizes societal resilience and has invested in both defensive and offensive capabilities. One notable aspect is the role of the UK’s regulatory and legal framework in deterring false information. UK law mandates impartiality in broadcast news. For example, the Broadcasting Act of 1990 requires accuracy in news broadcasts, and the Office of Communications (Ofcom) has penalized outlets (like Russia’s RT network) for spreading disinformation. This shows a willingness to enforce standards that indirectly counter state propaganda on British airwaves. While the U.S. has stronger free speech protections and no equivalent broadcast fairness doctrine for cable/online media, the UK example illustrates how holding outlets accountable for blatant falsehoods can raise the cost for foreign disinformation actors (RT eventually lost its UK license after repeated violations). Operationally, the UK has reportedly developed specialized military units (such as the British Army’s 77th Brigade) focusing on information operations, both to counter adversaries and to conduct strategic communications. These units blend reservists with tech and linguistics skills and have been involved in countering extremist propaganda and foreign influence messaging online. The UK also stood up a cross-government Counter Disinformation Cell to address COVID-19 misinformation and election-related falsehoods, demonstrating agile interagency coordination. Funding has been directed toward civil society projects and research to counter disinformation (an investigative report indicated that the Foreign Office had devoted over £25 million to such efforts since 2018). One lesson from the UK is the importance of an offensive strategy: officials have hinted at capabilities to identify and “call out” hostile information campaigns quickly. In late 2023, for example, when there were fears of a “deepfake election,” UK agencies actively worked with social media companies and political parties to prepare mitigation plans, which likely contributed to the threat not materializing in a significant way. The UK’s approach underlines the benefit of government-led initiatives combined with regulation and close platform cooperation. For the U.S., adopting certain UK practices, like funding independent fact-checkers and media literacy, or exploring narrow regulations on imposter content in election contexts, could strengthen overall resilience.

 

    b. Estonia and the Baltic States: Estonia offers a powerful case study in building national immunity to disinformation. As a small country that suffered a major Russian cyber and influence attack in 2007 (when a disagreement over a war monument triggered riots amplified by Russian misinformation), Estonia responded by treating information security as a pillar of national security. A standout aspect is Estonia’s focus on education and media literacy as a long-term solution. Starting in 2010, Estonia integrated media literacy into its national school curriculum at all levels, from kindergarten through high school. By high school, students take mandatory courses on “media and influence,” learning how to critically evaluate online information. As one Estonian official put it, media literacy is now considered “as important as math or writing or reading” in the country. The result is a populace better equipped to spot fake news and less likely to be swayed by malicious rumors. The U.S. can draw from this by supporting digital literacy initiatives domestically (though our education system is decentralized, federal grants or programs could encourage inclusion of critical thinking about online content in curricula nationwide). Another key element in Estonia’s model is whole-of-society engagement. In 2016, anticipating interference in its own elections, Estonia’s State Electoral Office convened an interagency task force specifically to counter disinformation. This network included not just government agencies but also social media companies, civil society groups, and the press. They collaboratively monitored for false narratives and worked with media to correct them. This proactive network successfully blunted foreign interference during elections. It wasn’t without challenges; Estonia found that free speech concerns limited its ability to tackle domestic political disinformation spread by local actors, highlighting that democracies must be careful to target the foreign source rather than censor internal debate. But overall, Estonia (along with Latvia and Lithuania, who have similar experiences with Russian info-war tactics) demonstrates the value of grassroots resilience and quick coordination. They also leverage international cooperation: Estonia hosts NATO’s Cooperative Cyber Defense Center of Excellence (CCDCOE) and, along with neighbors, contributes to the NATO StratCom Center of Excellence, which conducts research and exposes disinformation campaigns. The lesson for the U.S. is the importance of public preparedness, a citizenry that can resist manipulation, and the use of agile multi-stakeholder coalitions to counter falsehoods in real time. U.S. counterintelligence could foster something similar by building partnerships with journalists and NGO fact-checkers domestically, so that when an influence operation is detected, there’s a network ready to amplify truth and context to the public.

 

    c. Israel (and Other Experienced States): Israel provides an instructive perspective as a nation frequently targeted by influence and propaganda efforts, especially by Iran and its proxies. In recent conflicts with Hamas and Hezbollah, Israel has faced waves of social media disinformation aimed at swaying international opinion or inciting unrest. The Israeli government and military have refined a strategy of rapid factual transparency to counteract false narratives. During the May 2021 Israel-Hamas conflict, for example, Israeli officials quickly debunked fabricated images and deepfake-style audio clips circulated by Iran-linked groups by releasing evidence and clarifications through official channels and sympathetic media. An Israeli military analysis stressed “the truth, the whole truth, and nothing but the truth” as the guiding principle of its information operations: countering adversary propaganda by flooding the space with accurate information and corrections. Israel also employs specialized intelligence units (such as the IDF’s Unit 8200 and others) that monitor online trends and coordinate with tech platforms to remove posts that pose immediate dangers (e.g., incitement to violence based on false rumors). Another notable aspect is how Israel leverages alliances, working closely with the U.S. and European governments to share intelligence on Iranian disinformation networks. For instance, Microsoft reported in 2023 that Iranian influence campaigns were expanding in scope using AI, an assessment likely informed by Israeli cyber-intelligence cooperation given the common target set. Additionally, Israel’s population, much like Estonia’s, has developed skepticism toward obvious propaganda after long exposure to it. A potential lesson from Israel is the effectiveness of offensive defense: Israeli authorities often quickly attribute influence campaigns to their source (for example, publicly noting that a video originates with Iran’s IRGC apparatus), thereby undercutting the campaign’s credibility. The U.S. could adopt similar tactics by more frequently declassifying and revealing who is behind major disinformation efforts, relying on the credibility of U.S. intelligence to persuade the global public. Beyond Israel, Finland has earned praise for consistently ranking high in resilience to fake news, due in part to public education and a strong public broadcasting ethos. Taiwan is another example: facing an onslaught of Chinese disinformation, it has innovated with real-time crowd-sourced fact-checking and government “meme teams” that issue humorous but factual rebuttals to viral falsehoods, capturing public attention more effectively than dry press releases. These international experiences underscore that an effective counter-disinformation strategy blends government action with civil society engagement and, at times, creative communication. By studying these models, the U.S. can adapt national media literacy campaigns (Finland’s approach), nimble fact-checking alliances (Taiwan’s approach), and strategic public attribution of adversaries (Israel’s approach) into its own playbook.

 

    d. Common Threads and NATO/EU Cooperation: Across many allies, two common themes stand out: coalition-building and democratic values. European nations and NATO structures have emphasized that countering disinformation is not a task for government alone; it requires a coalition of journalists, scholars, technology firms, and citizens. Multinational efforts, such as the EU’s East StratCom Task Force (which runs the EUvsDisinfo database of Russian disinformation cases), show how sharing information across borders helps everyone recognize recurring patterns and narratives. NATO’s StratCom Centre of Excellence has conducted joint research (for example, on deepfake detection and the impact of social media manipulation) and shares best practices among member states. The U.S. participates actively in these forums but could do more to incorporate their findings and contribute resources. Another notable allied practice is setting red lines: some countries have signaled that if a foreign power used deepfakes to, say, fabricate statements by their leaders, it would be treated as a serious national security threat potentially warranting retaliation (Estonia and France have both hinted at this in policy papers). Declaring such policy can itself be a deterrent. Finally, allies have found that reinforcing positive narratives and truthful information is as important as countering falsehoods. During the pandemic, for instance, Canada and the UK focused on pushing out authoritative health information to drown out anti-vaccine propaganda. The U.S. might similarly frame its CI mission not just as stopping bad information but also as promoting good information, working with trusted voices in communities to ensure factual narratives prevail. In conclusion, the experiences of allies reveal a rich toolkit: education, regulation, cross-sector networks, intelligence sharing, strategic communication, and norm-setting. By learning from these and tailoring them to American contexts, U.S. counterintelligence can craft a more robust approach that is both effective and consistent with democratic norms.

 

6.    Public-Private Partnerships in Counter-Influence Operations

 

A recurring theme in countering AI-enabled influence operations is the indispensable role of public-private partnerships. The government cannot tackle this threat alone: much of the battleground is owned and operated by private-sector entities (social media platforms, messaging services, internet infrastructure), and most AI innovation happens in the commercial sector. Forging strong collaboration between U.S. counterintelligence agencies and the private sector (tech companies, media organizations, and civil society) is therefore a cornerstone of any successful strategy.

 

    a. Tech Firms and Social Media Platforms: These actors are on the front lines where disinformation is disseminated and amplified, and they often possess the technical tools and data needed to detect and take down malicious content. The partnership between government and platforms needs to be two-way and trust-based. On one hand, government (through agencies like the FBI’s FITF or DHS) can share intelligence about foreign influence operatives, for instance accounts known to be controlled by a foreign troll farm or indicators of an impending deepfake campaign, so that platforms can act on their own terms. On the other hand, platforms can share anonymized insights and early warnings with the government. A successful example came during the 2018-2020 election period, when Facebook, Twitter, and Google regularly briefed federal agencies on disinformation trends they observed, leading to joint press announcements outing foreign influence campaigns (e.g., Facebook and the FBI coordinated on revealing a Russian fake-account network in October 2020). To institutionalize this, the U.S. government could create a joint analytics center where platform representatives and government analysts sit side by side (physically or virtually) during critical periods (elections, crises) to exchange information on emerging threats. Some of this was done informally in 2020 under the umbrella of the Foreign Influence Task Force and the Election Integrity Partnership (a multi-stakeholder collaboration), but formal frameworks would make it endure. The recent retreat from some formal cooperation (due to legal challenges and political controversy) must be addressed: clear legal safe harbors and transparency can reassure platforms that cooperating on foreign threat information will not lead to liability. It is also crucial to communicate the DOJ’s guidance that protecting online free expression is part of the mission. Additionally, tech firms have taken their own initiative on AI and election integrity: in early 2024, twenty major tech companies signed a voluntary accord to counter the deceptive use of AI in elections, committing to develop better deepfake detection, label synthetic media, and share best practices. The government should encourage and support these efforts, possibly by convening a regular summit or working group with these companies to track progress and challenges. Ultimately, the public-private equation with platforms is about combining the government’s intelligence on threat actors with the platforms’ visibility into and control over their own networks. Together they can achieve what neither can alone: quickly identifying foreign influence operations and implementing mitigations (content removal, algorithm adjustments, user warnings, etc.) before the disinformation achieves its intended effect.
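
To make the information-sharing concept concrete, the sketch below shows a minimal, hypothetical indicator record of the kind a joint analytics center might pass between platform and government analysts. The `InfluenceIndicator` class and its field names are illustrative assumptions, not an existing standard; a real exchange would more likely use an established format such as STIX over an agreed secure channel.

```python
# Minimal sketch (Python) of a hypothetical indicator record for
# government-platform exchange. Field names are illustrative only;
# operational programs would likely use an established format such as STIX.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class InfluenceIndicator:
    indicator_type: str   # e.g., "account_handle", "domain", "media_hash"
    value: str            # the observable itself
    attributed_actor: str # e.g., "suspected foreign troll farm" (assessment, not proof)
    confidence: str       # "low" | "medium" | "high"
    source: str           # originating analyst team or platform
    shared_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def to_shareable_json(indicators: list[InfluenceIndicator]) -> str:
    """Serialize a batch of indicators for transfer over an agreed channel."""
    return json.dumps([asdict(i) for i in indicators], indent=2)

if __name__ == "__main__":
    batch = [
        InfluenceIndicator("account_handle", "@example_persona_123",
                           "suspected foreign troll farm", "medium",
                           "platform_trust_and_safety"),
    ]
    print(to_shareable_json(batch))
```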

 

    b. Media Organizations and Journalists: The news media and fact-checking organizations play a vital role in exposing and correcting disinformation, and partnering with them can amplify counterintelligence efforts. When U.S. intelligence uncovers a major influence campaign, working with investigative journalists or reputable outlets to publicly attribute and explain it can inform the public in a credible way; this was done to some extent with the FBI and ODNI public service announcements about election interference, which media then reported on. Going further, CI agencies could brief editorial boards or journalism associations about emerging deepfake threats, so newsrooms are prepared to scrutinize suspicious videos or audio that land in their inboxes (e.g., a deepfake “leak” of a politician). Media can also help with pre-bunking, a strategy of warning audiences about specific false narratives before they spread. For example, if intelligence suggests adversaries will push a narrative that a certain vaccine is harmful or a certain election is “rigged,” CI could quietly tip off health or political reporters to watch for these themes, so that accurate stories debunking them run early. Another partnership avenue is the growing fact-checking community. Organizations like Snopes, PolitiFact, Bellingcat, and numerous university labs continuously monitor and debunk false claims (many now aided by AI tools themselves). Government could, without controlling them, facilitate a network through which these fact-checkers receive relevant unclassified intelligence or leads. DHS or State (through the GEC) could fund grants supporting fact-checking of foreign propaganda in multiple languages, including development of AI systems that help trace the provenance of viral images or video (as Getty Images or Reuters do when running forensics on suspect footage from conflict zones). It is important that government support for media and fact-checkers remain non-partisan and arm’s-length to maintain credibility; lessons from some European efforts show that over-enthusiastic government messaging can backfire if the public suspects bias. The key is enabling independent media to do what they do best: shine light on the truth. From the CI perspective, journalists are force multipliers: one well-researched article in a major newspaper dissecting a foreign influence campaign can undermine that campaign far more effectively than any classified report. Public-private partnership here means opening channels (with appropriate security) for information to flow to those who inform the public.
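
As one illustration of the provenance-tracking tools mentioned above, the sketch below uses perceptual hashing to check whether a “new” viral image is actually a recycled or lightly altered copy of previously published footage, a common conflict-zone disinformation tactic. It is a simplified example under stated assumptions: the file paths and distance threshold are made up, and real forensics workflows would combine several techniques (metadata analysis, reverse image search, deepfake detectors).

```python
# Simplified provenance check (Python) using perceptual hashing.
# Requires: pip install Pillow imagehash
# File paths and the distance threshold are illustrative assumptions.
from PIL import Image
import imagehash

def load_known_hashes(paths: list[str]) -> dict[str, imagehash.ImageHash]:
    """Hash a reference library of previously published images."""
    return {p: imagehash.phash(Image.open(p)) for p in paths}

def find_likely_source(suspect_path: str,
                       known: dict[str, imagehash.ImageHash],
                       max_distance: int = 8) -> list[tuple[str, int]]:
    """Return reference images whose perceptual hash is close to the suspect image.

    A small Hamming distance suggests the "new" image is a copy or minor edit
    of older footage rather than an original capture.
    """
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    matches = [(path, suspect_hash - ref_hash) for path, ref_hash in known.items()
               if (suspect_hash - ref_hash) <= max_distance]
    return sorted(matches, key=lambda m: m[1])

if __name__ == "__main__":
    library = load_known_hashes(["archive/strike_2021.jpg", "archive/parade_2019.jpg"])
    for path, distance in find_likely_source("viral_claim.jpg", library):
        print(f"Possible recycled source: {path} (hash distance {distance})")
```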

 

    c. Civil Society and Academia: Beyond the media, a range of civil society actors contribute to resilience against disinformation. Academics study the tactics and psychology of influence campaigns, NGOs run digital literacy workshops, and community organizations help disseminate information at the grassroots. U.S. counterintelligence should engage and support these allies. For instance, the intelligence community could declassify portions of its analysis of how foreign propaganda targets specific communities (military families, diaspora groups, and others) and share it with community leaders so they are aware and can act. Sweden offers a model: authorities there regularly inform local governments and even high school teachers about disinformation themes to watch for. The U.S. might convene a public-private council on disinformation that includes representatives from academia, think tanks (such as RAND, Brookings, and the Atlantic Council), and non-profits focused on election integrity, with CI officials as liaisons. Such a council could advise on ethical and effective strategies, evaluate the impact of current efforts, and serve as a communication bridge to the public. One example of an academic contribution is the development of datasets and challenges for deepfake detection: academic teams (often funded by DARPA or NSF grants) created large libraries of deepfake videos that spurred improvements in detection algorithms through competitions. Government should continue to fund and encourage open research in this space. Another area is psychological research on why people believe and share false information; academic insights here can make CI messaging more persuasive. Civil society can also guard against overreach. Organizations like the ACLU or the Knight First Amendment Institute, while sometimes critical, provide feedback that helps ensure counter-disinformation measures do not violate rights; including them in dialogue (if not operational planning) adds legitimacy and balance. Lastly, empowering community-level responses is important: for example, training local election officials and librarians to identify and respond to malign disinformation in their spheres. During the 2020 election, CISA gave state and local officials tools to debunk voting rumors; this should continue and expand, with CI contributing intelligence about foreign sources targeting those localities. In sum, the social fabric of America (schools, non-profits, religious institutions, tech literacy groups) all have roles in immunizing the populace against falsehoods. A public-private partnership approach treats these actors as part of an extended defense network in which government shares information and resources while they share on-the-ground knowledge and community trust.
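
To give a concrete sense of how the academic detection challenges mentioned above are typically scored, the short sketch below evaluates a hypothetical detector's outputs against labeled benchmark clips using ROC AUC, a standard metric in such competitions. The labels and scores are fabricated for illustration only; no real benchmark or detector is implied.

```python
# Sketch (Python) of how a deepfake-detection benchmark is commonly scored.
# Requires: pip install scikit-learn
# The labels and detector scores below are fabricated for illustration only.
from sklearn.metrics import roc_auc_score, roc_curve

# 1 = deepfake, 0 = authentic (ground-truth labels from a benchmark dataset)
labels = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]

# Hypothetical detector outputs: probability that each clip is synthetic
detector_scores = [0.91, 0.78, 0.12, 0.34, 0.66, 0.08, 0.83, 0.41, 0.22, 0.57]

# Area under the ROC curve: 1.0 is perfect separation, 0.5 is chance
auc = roc_auc_score(labels, detector_scores)
print(f"Detector ROC AUC on benchmark: {auc:.3f}")

# Inspect operating points to choose a deployment threshold
fpr, tpr, thresholds = roc_curve(labels, detector_scores)
for f, t, th in zip(fpr, tpr, thresholds):
    print(f"threshold={th:.2f}  true-positive rate={t:.2f}  false-positive rate={f:.2f}")
```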

 

    d. Addressing Corporate and Economic Targets: Many U.S. corporations (energy firms, tech companies, even airlines) have been targets of foreign influence or smear campaigns, sometimes to gain competitive advantage or to punish them for political reasons (e.g., Russian media spreading disinformation about a U.S. food company to reduce its market share). Corporate security officers and CI officers should establish channels to share threat information. The National Counterintelligence and Security Center (NCSC) has begun programs to brief industry leaders on foreign intelligence threats; these should expand to cover disinformation and AI threats specifically. As one CSIS study found, “corporate leaders need to learn about counterintelligence” in the context of influence operations. The government could create a corporate counter-disinformation toolkit that includes best practices such as monitoring social media for fake reviews or impostor accounts and steps to take if a CEO is deepfaked. In turn, companies (especially social media advertisers and data firms) can feed insights to government about how adversaries may be manipulating the information environment affecting businesses or markets. Public-private partnership in the corporate realm also intersects with financial regulation: the SEC and Treasury might coordinate with CI to watch for influence-driven stock manipulation schemes (if a state actor uses disinformation to short a stock, that is both a market-integrity issue and a national security issue). By enlisting the private sector as both sensor and responder, U.S. counterintelligence can cover far more ground.
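
As a simple illustration of the monitoring a corporate counter-disinformation toolkit might recommend, the sketch below flags social media handles that closely resemble a company's official accounts, one common sign of impostor accounts. The brand handles, candidate handles, and similarity threshold are assumptions for illustration; a real program would also weigh verification status, account age, and behavioral signals.

```python
# Sketch (Python, standard library only) for flagging possible impostor accounts.
# Brand handles, candidate handles, and the threshold are illustrative assumptions.
from difflib import SequenceMatcher

OFFICIAL_ACCOUNTS = {"acmeenergy", "acmeenergy_support"}  # hypothetical brand handles

def similarity(a: str, b: str) -> float:
    """Case-insensitive string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_impostors(candidate_handles: list[str],
                   threshold: float = 0.8) -> list[tuple[str, str, float]]:
    """Return handles that look deceptively similar to an official account but are not it."""
    flags = []
    for handle in candidate_handles:
        if handle.lower() in OFFICIAL_ACCOUNTS:
            continue  # this is the real account
        for official in OFFICIAL_ACCOUNTS:
            score = similarity(handle, official)
            if score >= threshold:
                flags.append((handle, official, round(score, 2)))
    return flags

if __name__ == "__main__":
    seen_handles = ["acme_energy", "acmeenergyy", "acmeenrgy_support", "totally_unrelated"]
    for handle, official, score in flag_impostors(seen_handles):
        print(f"Review {handle!r}: resembles official account {official!r} (similarity {score})")
```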

 

    e. In conclusion, collaboration with private stakeholders is not just beneficial but arguably the linchpin of countering AI-enabled influence threats. The information space is largely privately owned, and innovation moves at the speed of the private sector. Government brings vital authorities, intelligence, and convening power; private entities bring agility, technical prowess, and proximity to the problem. A modernized approach would formalize these partnerships while respecting boundaries: the government facilitates and informs but does not dictate platform policies or journalistic content. When these sectors work in concert, foreign disinformation actors find it much harder to slip through the cracks. A malicious fake video might be caught by a tech platform’s AI, flagged to the FBI, analyzed by a think tank, debunked by a news outlet, and recognized as false by citizens who saw the debunking. Such an ecosystem of shared awareness and action dramatically increases resilience. A strategic priority for U.S. counterintelligence, therefore, is to nurture an environment in which public and private actors unite around the common goal of information integrity, each playing to its strengths.

 

7.    Conclusion

 

    a. The accelerating convergence of AI technology and malign foreign influence operations demands an urgent evolution in U.S. counterintelligence. Adversaries are exploiting AI-generated content to wage a new kind of cognitive warfare, one that targets our social cohesion, democratic processes, and even corporate assets through deception and manipulation. In answer to the central question of how U.S. counterintelligence should adapt: we must pursue a comprehensive modernization that is technical, operational, and policy-driven, all while steadfastly upholding our democratic values.

 

    b. From a technical standpoint, U.S. CI needs to harness innovation to fight innovation. This means investing in cutting-edge detection capabilities (for deepfakes, bots, and synthetic text) and employing AI as a tool for our analysts to map and anticipate influence campaigns. Just as importantly, it means building seamless public-private digital defense partnerships, aligning with tech companies, researchers, and civil society to share information and co-develop solutions at the speed of relevance. The goal is to dramatically shrink the window of opportunity for adversarial influence operations: to detect them early, attribute them reliably, and counter them decisively before they achieve their intended effect.

 

    c. Operationally, U.S. counterintelligence must break out of its silos and embrace agility. A key recommendation is to establish integrated structures (such as a centralized task force or coordination center) that bring together the many agencies and kinds of expertise needed to confront AI-enabled influence operations. Through joint training, contingency planning, and rapid response teams, the CI community can act with unity and speed when a crisis hits, whether a viral deepfake or a coordinated disinformation blitz. Allied nations have gained an edge by preparing their institutions and populaces in advance; the U.S. should similarly inculcate a proactive, whole-of-nation approach. That includes training CI officers in the new skills required, educating the public to be savvy consumers of information, and engaging private stakeholders as part of the operational workflow. Our adversaries are coordinated in their efforts; we must be even more coordinated in our defenses.

 

    d. On the legal and policy front, adaptation calls for clear frameworks and safeguards to guide this fight on terms consistent with our laws and ideals. We need updated laws that enable action against foreign AI-driven disinformation (for instance, faster attribution and sanctioning mechanisms) without eroding constitutional rights. Policies should explicitly protect free expression, making clear that the aim is to thwart foreign malign actors, not to police domestic debate. By being transparent about our strategy and rigorous in oversight, we negate the false narrative that countering disinformation is a cover for censorship; it is instead a defense of the integrity of our democratic discourse against hostile interference. In striking this balance, the U.S. can serve as a model for open societies, demonstrating that it is possible to defend against information warfare while preserving liberty.

 

    e. The comparative lessons from allies reinforce these conclusions. The UK’s experience stresses the importance of strong coordination and norms; Estonia’s success highlights education and multi-sector networks; Israel and others show the value of rapid truth-telling and offensive attribution. A common insight is that resilience is achieved not by government alone but by society as a whole. U.S. counterintelligence strategy should therefore embed itself in a larger societal resilience effort, one that champions digital literacy, robust journalism, and civic empowerment as bulwarks against foreign falsehoods. As many democracies have learned, resilience is the best antidote to disinformation: when citizens can critically assess information and are inoculated against manipulation, influence operations falter.

 

    f. Moving forward, the modernization of U.S. counterintelligence to counter AI-enabled influence operations can be envisioned as a strategic roadmap:

 

        (1) Near-term (next 1-2 years): Stand up enhanced interagency coordination mechanisms and pilot detection tools in partnership with industry. Issue clear government guidelines on platform cooperation and civil liberties protections. Conduct extensive scenario planning, involving deepfakes and bot swarms, for upcoming elections and geopolitical flashpoints (e.g., Taiwan Strait contingencies).

 

        (2) Mid-term (3-5 years): Integrate AI-driven analytics into routine CI workflows; hire and train a new cadre of “digital counterintelligence” specialists. Achieve a regularized exchange of information with major tech firms and establish norms (potentially through international agreements or alliances) on combating AI disinformation. Implement public education campaigns on deepfakes in collaboration with schools and media.

 

        (3) Long-term (5+ years): Institutionalize a culture of agility and anticipation in counterintelligence; an enterprise that continuously adapts to technological change. By this point, detection of AI fakes should be so ubiquitous (possibly built into devices and platforms) that they lose much of their potency. The measure of success will be when foreign influence operations, however sophisticated, struggle to gain traction because our defenses (technological, human, and societal) present a unified front.

 

    g. In sum, to protect the integrity of our social and political fabric, U.S. counterintelligence must evolve from a traditionally secretive, reactive posture into a modern, collaborative, and forward-leaning force, one that detects deception swiftly, exposes it transparently, and engages a broad coalition to neutralize its impact. The United States has confronted and overcome waves of foreign subversion before; the AI-enabled variant is simply the latest test. By adapting as outlined (leveraging innovation, strengthening coordination, enacting smart policies, and partnering with the very society it protects), U.S. counterintelligence can rise to meet this 21st-century challenge. In doing so, we will not only thwart our adversaries’ immediate aims but also fortify the foundations of trust and truth upon which our democracy rests.

 

Bibliography:

·         Office of the Director of National Intelligence (ODNI), Annual Threat Assessment of the U.S. Intelligence Community, 2025. (Excerpt on Russia’s use of AI for deepfakes)

·         National Counterintelligence and Security Center (NCSC), Safeguarding Our Elections: Deepfakes to Influence U.S. Elections, Aug. 2020. (Bulletin warning of foreign adversaries using AI deepfakes to shape public opinion)

·         Shannon Bond, NPR/WFAE, “How Russia is using artificial intelligence in its propaganda operations,” July 11, 2024. (DOJ case of Russian bot farm using AI for fake personas; cost reduction of 100x vs. trolls)

·         Judit Gaspar, Belfer Center – Deepfakes: Navigating the Information Space in 2023 and Beyond, Spring 2023. (Discussion of Zelensky deepfake video in 2022 used for geopolitical ends)

·         Todd C. Helmus et al, RAND Corporation, Artificial Intelligence, Deepfakes, and Disinformation: A Primer, July 2022. (Examples of generative text used to mimic troll posts; China’s use of AI to flood #Xinjiang content)

·         Zak Butler, Google Threat Analysis Group, “DRAGONBRIDGE influence operation leaning on AI,” Google TAG blog, June 26, 2024. (China’s prolific Spamouflage Dragon network using spammy AI-generated content on wedge issues)

·         Dan Milmo, The Guardian, “Iran-backed hackers interrupt UAE TV with deepfake news,” Feb. 8, 2024. (Iran’s IRGC operation inserting AI-generated fake newscaster into streaming platforms; Microsoft analysis on Iran’s expanded use of AI since Gaza conflict)

·         Congressional Research Service, The Intelligence Community’s Foreign Malign Influence Center, Nov. 27, 2024. (Background on establishment of ODNI’s FMIC to coordinate on foreign influence threats)

·         Alan Cunningham, U.S. Army War College – War Room, “Counterintelligence in the 21st Century: The Need for Integration,” March 17, 2021. (Call for deeper integration across CI agencies and with private sector to meet emerging threats)

·         U.S. Department of Justice, “Strategic Principles for Combating Foreign Malign Influence on Social Media,” (archived at justice.gov). (DOJ guidelines emphasizing info-sharing with social media companies and protecting First Amendment rights in counter-FMI work)

·         Estonian State Electoral Office case study, Defending the Vote: Estonia Creates a Network to Combat Disinformation 2016–2020, Princeton Univ. (Interagency and multi-sector task force in Estonia; media literacy in schools; free speech constraints)

·         Amy Yee, BBC Future, “The country inoculating against disinformation,” Jan. 31, 2022. (Estonia’s national media literacy education initiative as part of national security; mandatory curriculum on media/influence)

·         Atlantic Council, Democratic Defense Against Disinformation report, June 2021. (Overview of transatlantic efforts; notes that many European governments launched counter-propaganda units and the EU/NATO role in sharing data)

·         CSIS, Foreign Malign Influence Targeting U.S. Corporations, Oct. 2021. (Examples of Russia/China disinformation targeting companies; recommendation that firms improve corporate counterintelligence and info-sharing)

·         FDD, Max Lesser & Mariam Davlashelidze, “America’s Adversaries Use AI for Malign Influence, But Not to Great Effect…Yet,” Feb. 5, 2025. (Summary of Google’s findings on Iran, China, Russia using generative AI tools in influence ops; current limitations)

·         Brookings Institution, Chris Meserole, “How campaigns can protect themselves from deepfakes,” July 2019. (Discussion of deepfake threats to elections and need for rapid response and verification protocols).

·         Lawfare, Quinta Jurecic & Eugenia Lostri, “Why Is the Government Fleeing Key Tech Partnerships Before 2024?”, Feb. 27, 2024. (Describes political/legal pressures causing FBI to scale back social media cooperation and implications for election security).

·         RAND Commentary, Linda Slapakova, “Towards an AI-Based Counter-Disinformation Framework,” Mar. 29, 2021. (Highlights AI as a double-edged sword – enabling threats but also offering tools for defense; discusses shortfalls of AI countermeasures and need to overcome technical/governance barriers).

·         NSA, CISA, FBI, “Deepfakes: Threat and Mitigations” Cybersecurity Information Sheet, 2023. (Interagency guidance on the deepfake threat, including technical indicators and recommended responses for organizations).

·         Knight First Amendment Institute, “Foreign Influence and the Immorality of Censorship,” Aug. 2020. (Argument that robust free speech is the best defense against foreign propaganda and warnings against overbroad censorship in name of countering influence).

 

(All URLs and citations are current as of 2025. Sources include official reports, think tank analyses, news articles, and academic studies as indicated.)
