CyberAI - Line of Research

CSET’s CyberAI Project focuses on the intersection of AI/ML and cybersecurity, including analysis of AI/ML’s potential uses in cyber operations, the potential failure modes of AI/ML applications for cyber, how AI/ML may amplify future disinformation campaigns, and geostrategic competition centered around cyber and AI/ML.

Recent Publications

All Publications
Reports

The Use of Open Models in Research

Kyle Miller, Mia Hoffmann, and Rebecca Gelles | October 2025

This report analyzes over 250 scientific publications that use open language models in ways that require access to model weights and derives a taxonomy of use cases that open weights enable. The authors identified a diverse range of seven open-weight use cases that allow researchers to investigate a wider scope...

Read More

Reports

Harmonizing AI Guidance: Distilling Voluntary Standards and Best Practices into a Unified Framework

Kyle Crichton, Abhiram Reddy, Jessica Ji, Ali Crawford, Mia Hoffmann, Colin Shea-Blymyer, and John Bansemer | September 2025

Organizations looking to adopt artificial intelligence (AI) systems face the challenge of deciphering a myriad of voluntary standards and best practices—requiring time, resources, and expertise that many cannot afford. To address this problem, this report distills over 7,000 recommended practices from 52 reports into a single harmonized framework. Integrating new...

Read More

This roundtable report explores how practitioners, researchers, educators, and government officials view work-based learning as a tool for strengthening the cybersecurity workforce. The participants' discussion offered insight into what makes work-based learning unique, effective, and valuable for the cyber workforce.

Read More

Recent Blog Articles

All Blog Articles

Red-teaming is a popular evaluation methodology for AI systems, but it still lacks theoretical grounding and technical best practices. This blog post introduces the concept of threat modeling for AI red-teaming and explores the ways that software tools can support or hinder red teams. To do effective evaluations,...

Read More

AI Control: How to Make Use of Misbehaving AI Agents

Kendrea Beers and Cody Rushing | October 1, 2025

As AI agents become more autonomous and capable, organizations need new approaches to deploy them safely at scale. This explainer introduces the rapidly growing field of AI control, which offers practical techniques for organizations to get useful outputs from AI agents even when the AI agents attempt to misbehave.

Read More

China’s Artificial General Intelligence

William Hannas and Huey-Meei Chang | August 29, 2025

Recent op-eds comparing the United States’ and China’s artificial intelligence (AI) programs fault the former for its focus on artificial general intelligence (AGI) while praising China for its success in applying AI throughout the whole of society. These op-eds overlook an important point: although China is outpacing the United States...

Read More

Our People

See All

John Bansemer

Non-Resident Senior Fellow

Ali Crawford

Senior Research Analyst

Andrew Lohn

Senior Fellow

Colin Shea-Blymyer

Research Fellow

Jenny Jun

Non-Resident Fellow

Jessica Ji

Senior Research Analyst

Josh A. Goldstein

Research Fellow

Kendrea Beers

Research Analyst

Kyle Crichton

Research Fellow

Kyle Miller

Research Analyst

Related News

All News
John Bansemer and Kyle Miller shared their expert analysis in a report published by the International Institute for Strategic Studies. In their piece, they highlight the release of DeepSeek's open-weight AI model "R1" in January 2025 and its major impact on global AI competition, especially between China and the United States.

In an article published by NPR that discusses the surge in AI-generated spam on Facebook and other social media platforms, CSET's Josh A. Goldstein provided his expert insights.

In a new preprint paper, CSET's Josh A. Goldstein and the Stanford Internet Observatory's Renee DiResta explored the use of AI-generated imagery to drive Facebook engagement.

In an article published by the Brennan Center for Justice, Josh A. Goldstein and Andrew Lohn delve into the concerns about the spread of misleading deepfakes and the liar's dividend.

In a WIRED article discussing issues with Microsoft's AI chatbot providing misinformation, conspiracies, and outdated information in response to political queries, CSET's Josh A. Goldstein provided his expert insights.

In a KCBS Radio segment that explores the rapid rise of AI and its potential impact on the 2024 election, CSET's Josh A. Goldstein provides his expert insights.
