The SAIF Map, a new paradigm and shared vocabulary to navigate the landscape of AI risks and controls
Why SAIF?
SAIF is Google’s Secure AI Framework, which offers guidance for building and deploying AI responsibly.
As AI technology rapidly advances and threats continually evolve, the challenge of protecting AI systems, applications, and users at scale requires that developers have a high-level understanding of AI-specific privacy and security risks in addition to established secure coding best practices.
SAIF describes Google’s approach for addressing AI risks—including security of data, models, infrastructure, and applications involved in building AI—and is aligned with Google's Responsible AI practices, to keep more people safe online.
SAIF is designed to help mitigate risks specific to AI systems like model exfiltration, data poisoning, injecting malicious inputs through prompt injection, and sensitive data disclosure from training data.
What does SAIF offer?
Organizations that create AI products can use SAIF to develop more secure AI products and mitigate these risks during the development process. Organizations that use AI products can also use SAIF to understand - and address - the potential risks of using AI.
-
-
An interactive SAIF Risk Self Assessment to understand your organization’s AI risks, regardless of organizational size, sector, or current security posture
-
The SAIF Secure AI Development Primer, which explains the AI development process through the lens of security risks
-
Resources containing In-depth technical information about securing different parts of AI systems
Who should use SAIF?
SAIF is intended to guide any organization involved in creating AI models, products, or features. Leadership can turn to SAIF to understand the broad landscape of risks that AI introduces to their organization and learn about the controls that can address those risks. Technical practitioners, such as security specialists or AI practitioners, can use SAIF as a technical resource for guidance on how to secure specific AI systems throughout the AI development lifecycle.
SAIF for Technical Practitioners
- Explore the SAIF Map
- Read the Secure AI Development Primer
- Complete the Risk Self Assessment
- Identify Risks and Controls relevant to your domain and systems
- Explore Resources for more in-depth information
SAIF for Executives
- Complete the Risk Self Assessment
- Discuss your Risk Report with technical teams
- Read the Secure AI Development Primer
SAIF for Governance
- Complete the Risk Self Assessment
- Discuss your Risk Report with technical teams
- Audit controls and track progress regularly
Where does SAIF come from?
Introduced in 2023, SAIF is an externalization of Google’s own internal framework for securing its production and use of AI.
It leverages Google’s unique position in the field:
- Decades of experience building secure-by-default infrastructure and data handling practices, at enormous scale
- One of the longest industry track records of building secure AI, drawing directly from from knowledge of our product engineering teams and research from internal research organizations including Google DeepMind
SAIF is a distillation of lessons learned in both areas, combined to bring comprehensive AI security knowledge to the wider industry.
Why now?
The potential of AI, especially generative AI, is immense. AI is rapidly becoming integrated into aspects of our lives that involve sensitive data and impact critical decisions, from cancer diagnoses to customer service interactions. With AI innovation, though, the industry needs security standards for building and deploying AI responsibly for effective risk management.
Yet, the novel risks introduced by AI are not yet widely known by technical practitioners, and traditional software security measures may not address the new dimensions that AI introduces into technical development. The breakneck pace of AI advancements means that failing to prioritize security from the outset will result in costly and complex remediation efforts down the line. A framework is needed to safeguard the technology that supports AI advancements, so that when AI models are implemented, they’re secure-by-default.
What else should AI developers consider?
Beyond security, building AI responsibly requires consideration of important challenges that must be addressed thoughtfully and affirmatively. SAIF is just one part of a holistic approach to responsible AI development. Google's responsible AI practices also address other aspects of building AI responsibly, such as fairness, interpretability, privacy, and safety.