The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) CAISI issued a Request for Information (RFI) concerning the secure development and deployment of AI agent systems. This RFI seeks insights regarding unique security challenges—including adversarial data interactions and misaligned objectives—inherent in combining AI models with software functionality. Input will inform future guidelines and best practices for AI agent security.
CAISI RFI Targets AI Agent System Security Risks
CAISI is requesting information to address security concerns specific to AI agent systems, which autonomously plan and act in real-world contexts. Unlike typical software, these systems face risks from adversarial data—like indirect prompt injection—and vulnerabilities stemming from insecure AI models susceptible to data poisoning. Additionally, even without malicious input, models can exhibit harmful behavior through specification gaming or misaligned goals. The request for information aims to identify how existing cybersecurity measures apply, or fall short, when protecting AI agents. CAISI specifically seeks methods for measuring agent security, anticipating risks during development, and constraining agent access within deployment environments.
Responses are due by March 9, 2026, and will be used to develop future guidelines and inform ongoing research through docket number NIST-2025-0035.
NIST Seeks Input on AI Agent Development & Deployment Methods
The inquiry specifically targets risks beyond typical software vulnerabilities, focusing on issues like indirect prompt injection and data poisoning—methods used to manipulate model outputs. Understanding these unique threats is crucial, as broader deployment of AI agents could impact public safety and national security. The request for information also centers on quantifying agent security and proactively identifying risks during the development process. CAISI seeks input on whether current cybersecurity methods adequately address these new challenges and how to best constrain agent access within deployment environments. Responses, accepted until March 9, 2026, will shape future guidelines and research evaluating AI agent security, with submissions possible through regulations.gov under docket number NIST-2025-0035.
These security challenges not only hinder adoption today but may also pose risks for public safety and national security as AI agent systems become more widely deployed.
