From Guidance to Practice: Supporting AI Data Security Through HackerOne

Vanessa Booth
Policy Analyst

Today, federal agencies including the Cybersecurity and Infrastructure Security Agency (CISA), together with international partners, released new joint guidance titled “AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems.” The document outlines comprehensive recommendations for mitigating risks across the AI lifecycle, from data poisoning and adversarial attacks to metadata manipulation and supply chain threats.

Many of the best practices reflected in this guidance, including AI red teaming, vulnerability disclosure programs (VDPs), and other forms of security testing, are strengthened by collaboration with a global community of security experts. Ultimately, the guidance highlights the value of a proactive, risk-based approach to AI security for any organization, an approach also emphasized in the security frameworks and controls it references.

Key Insights from the Guidance & How HackerOne Supports Them

  • Red Teaming AI Models & Training Pipelines: The guidance emphasizes rigorous testing across training pipelines to detect threats such as model inversion and data poisoning. HackerOne’s AI Red Teaming services deliver offensive security assessments that identify vulnerabilities before they can be exploited, securing end-to-end AI workflows.
  • Securing the AI Data Supply Chain: Risks from poisoned, manipulated, or unreliable datasets, whether curated, collected, or scraped, are a central concern. HackerOne works with organizations to validate the integrity of training datasets and protect against tampering across the data supply chain (a minimal integrity-check sketch follows this list).
  • Detecting Malicious or Inaccurate Data: The guidance highlights corrupted data inputs and malicious metadata as critical causes of model degradation and bias. HackerOne combines automated analysis with human expertise to detect and help remediate these issues early.
  • Coordinated Vulnerability Disclosure for Community Defense: With a focus on collaboration among data providers and downstream users, the guidance underscores the importance of coordinated vulnerability disclosure. HackerOne’s VDP platform enables responsible disclosure of AI and data vulnerabilities, facilitating proactive community defense.
  • Monitoring for Data Drift and Model Degradation: As AI systems evolve, continuous risk monitoring becomes essential, and the guidance recommends sustained testing and response planning. HackerOne supports long-term model security through AI red teaming, bug bounty programs, and vulnerability management to track and mitigate emerging threats (a drift-monitoring sketch also appears below).
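
To make the supply-chain point concrete, here is a minimal sketch, assuming a pipeline that pins SHA-256 digests of its training files in a JSON manifest and refuses to train on anything that fails the check. The manifest name, file layout, and helper functions are illustrative assumptions, not part of the guidance or of any HackerOne tooling.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large dataset files never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path) -> list[str]:
    """Compare each dataset file against its pinned digest; return the paths that fail."""
    # Hypothetical manifest format: {"data/train.csv": "<hex digest>", ...}
    manifest = json.loads(manifest_path.read_text())
    failures = []
    for rel_path, expected in manifest.items():
        file_path = manifest_path.parent / rel_path
        if not file_path.exists() or sha256_of(file_path) != expected:
            failures.append(rel_path)
    return failures

if __name__ == "__main__":
    tampered = verify_manifest(Path("dataset_manifest.json"))  # hypothetical manifest file
    if tampered:
        raise SystemExit(f"Refusing to train: integrity check failed for {tampered}")
    print("All dataset files match the pinned manifest.")
```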

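Similarly, as a minimal sketch of what continuous drift monitoring can look like, the example below compares a production sample of one feature against its training-time baseline using a two-sample Kolmogorov-Smirnov test from SciPy. The data, feature, and alert threshold are illustrative assumptions; real deployments would track many features and route alerts into existing triage workflows.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical baseline: one feature's distribution captured at training time.
rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)

# Hypothetical live sample: the same feature observed in production, slightly shifted.
live = rng.normal(loc=0.4, scale=1.0, size=1_000)

# Two-sample KS test: a small p-value suggests the two distributions differ.
stat, p_value = ks_2samp(baseline, live)

ALERT_THRESHOLD = 0.01  # illustrative; tune per feature and tolerance for false alarms
if p_value < ALERT_THRESHOLD:
    print(f"Possible drift: KS statistic={stat:.3f}, p={p_value:.2e}")
else:
    print(f"No drift detected: KS statistic={stat:.3f}, p={p_value:.2e}")
```

Run on a schedule against each monitored feature, a check like this gives teams an early, quantitative signal of degradation well before model quality visibly drops.
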
Alignment with Established Frameworks and Controls

This new guidance references several foundational standards—including NIST SP 800-37 Rev. 2 (Risk Management Framework), the NIST AI RMF, and NIST SP 800-53 Rev. 5. These frameworks and controls promote practices such as penetration testing, adversarial testing, vulnerability management, and continuous monitoring—core components of HackerOne’s approach to proactive security.

By reinforcing these frameworks, the joint guidance further validates the importance of real-world testing, coordinated disclosure, and adversarial resilience in AI development and deployment.

Why This Matters for the AI Ecosystem

Securing AI requires a holistic, lifecycle-wide approach to data management, model assurance, and continuous validation. This guidance reflects that view, encouraging greater collaboration between policymakers, researchers, and private sector leaders.

HackerOne is committed to advancing secure-by-design AI by helping developers embed security from the ground up—through rigorous testing, responsible disclosure, and expert engagement across the broader security ecosystem.

We encourage all organizations building or operating AI systems to adopt these principles as foundational to their cybersecurity strategy.

Contact us to learn how HackerOne can help operationalize these AI Data Security best practices and defend your AI systems against emerging threats.