Skip to content

A New Joint Analytic Report (JAR) from the Cyber Threat Alliance: Cybersecurity in the Age of GenAI


Published:

By Chelsea Conard

The Cyber Threat Alliance (CTA) is excited to announce the publication Joint Analytic Report (JAR): Cybersecurity in the Age of GenAI. The CTA community selected this topic for its growing relevance and reflects the collaborative efforts of 16 CTA organizations. This report is broken into two parts. Part I, Combating GenAI Assisted Cyber Threats, addresses the use of GenAI tools for malicious purposes. Part II, Navigating Cyber Threats to GenAI Systems, examines cyber threats to these tools.

The JAR provides a factual analysis of how malicious actors are leveraging GenAI for AI-assisted cyber threats. Despite claims that GenAI is fundamentally transforming cybercrime, the reality is far less dramatic. Adversaries are indeed using these tools to automate and enhance existing threats, such as more sophisticated phishing campaigns, convincing deepfakes, and elaborate investment scams; however, this report dismantles the myth of GenAI as a revolutionary force. Instead, this report highlights GenAI’s real impact: enabling malicious actors to generate threats with greater speed and a lower barrier to entry. Thus, GenAI technology does not make threat actors smarter. Instead, it enhances their efficiency.

The absence of truly new or revolutionary tactics from malicious actors puts organizations in a strong position to defend against AI-assisted threats by using existing security measures.  Foundational cybersecurity practices such as multi-factor authentication, regular software updates, and endpoint monitoring remain essential, even as AI-driven threats evolve. At the same time, the increasing use of GenAI presents an opportunity to enhance defenses beyond technology and focus on user education and awareness. Organizations should emphasize training that prioritizes content-based analysis and fosters a culture of healthy skepticism. Employees should be encouraged to ask questions like, “Did I initiate this request” or “Is this communication consistent with prior exchanges?” This mindset will help defend against traditional cyberthreats, but it is especially critical to defend against threats like deepfakes. Other defenses can rely on both technical and process-based measures. Technical tools can analyze content for manipulation, such as audio mismatches. When these tools fall short, organizations can rely on process-based measures, including multi-channel verification, dual approvals or pre-arranged authentication phrases, to provide critical safeguards.

The JAR’s second part examines how AI systems themselves are emerging as targets. Adversaries are exploiting vulnerabilities in the AI ecosystem through techniques such as data poisoning, prompt injection, and model tampering. These threats expose critical weak points across the AI lifecycle, from data pipelines and model development to deployment and endpoint security. Addressing these risks requires organizations to adopt a comprehensive approach to security. Rigorous data governance mechanisms, such as binary authorization, can ensure that only validated datasets are used for training, and anomaly detection tools can monitor for irregularities that might indicate tampering. To further strengthen defenses, organizations can implement input filtering and adversarial training to reduce vulnerabilities to manipulated inputs. Role-based authentication and access controls can help protect sensitive systems by limiting access to only authorized users, and continued monitoring for suspicious activity remains essential to address insider threats. It is equally important to recognize that malicious actors can introduce supply chain risks by compromising pre-trained models, third-party APIs, or data sources. To mitigate such risks, organizations should vet third-party tools and components, regularly update systems, and monitor for signs of manipulations or compromise across the supply chain.

The JAR highlights that GenAI serves as an efficiency multiplier, making attacks faster and more scalable, but organizations do not need to reinvent their defenses. Instead, reinforcing existing security measures, coupled with targeted training, will provide effective protection. As with other aspects of the AI “revolution,” adversaries’ use of GenAI has not yet matched the hype, providing a window in which to prepare for AI-driven threats. However, this window may not remain open for very long, and organizations should take action now to be ready. 

To access the full report, click the following link. A special thank you to Craig Newmark Philanthropies for their generous financial support of the CTA JARs collaboration.