Securing Advanced AI Requires New Infrastructure Security Approaches
As artificial intelligence systems become more advanced, OpenAI believes new security measures will be needed to protect these powerful technologies from sophisticated cyber threats. Their mission is to ensure advanced AI benefits society, but this requires building trustworthy AI infrastructure resilient against those seeking to misuse or steal the underlying technology.
The Evolving Threat Landscape
AI technology is highly prized and actively pursued by sophisticated cyber threat actors with strategic motivations. OpenAI anticipates these threats will intensify as AI grows more strategically important. A key priority is safeguarding AI model weights – the trained parameters that encode the knowledge and capabilities produced by the training process. While model weights must be available online to power AI applications, that very availability creates potential attack vectors.
Limitations of Current Security Controls
Conventional cybersecurity controls like network monitoring and access management are valuable, but have limitations for protecting advanced AI given its unique requirements. Unlike conventional software assets, AI model weights must be readily accessible online to enable their use, creating potential avenues for theft if the infrastructure is compromised. New defensive approaches are needed.
Six Proposed Security Measures
OpenAI advocates an evolution of secure infrastructure paradigms, similar to how past technological shifts like automobiles and the internet drove safety and security innovations. They propose six complementary security measures:
- Trusted computing for AI accelerators
- Robust network and tenant isolation
- Advanced operational and physical datacenter security
- AI-specific security compliance standards
- Leveraging AI for cyber defense
- Resilience through redundancy and research
I. Trusted AI Accelerator Computing
Technologies like confidential computing could extend hardware-based encryption and attestation from CPUs to AI accelerators like GPUs. This could enable model weights to remain encrypted until loaded on authorized GPUs and allow per-GPU encryption keys, mitigating threats from compromised hosts or storage.
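The control flow described above can be sketched as attestation-gated key release. This is a minimal illustration under stated assumptions, not a real confidential-computing API: the attestation report, measurement values, and key-release service are invented for the example, and an HMAC stands in for the hardware-signed attestation a real GPU root of trust would produce.

```python
# Sketch of attestation-gated release of a per-GPU decryption key.
# Hypothetical protocol: the key service releases a per-GPU key only
# after verifying a signed attestation report, so model weights stay
# encrypted until they reach an authorized, unmodified accelerator.
import hashlib
import hmac
import secrets

VENDOR_ROOT_KEY = secrets.token_bytes(32)  # stands in for the vendor root of trust
EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-gpu-firmware").hexdigest()

def sign_report(measurement: str) -> bytes:
    # In reality the GPU's hardware root of trust signs the report;
    # an HMAC stands in for that hardware signature here.
    return hmac.new(VENDOR_ROOT_KEY, measurement.encode(), hashlib.sha256).digest()

def release_per_gpu_key(measurement: str, signature: bytes,
                        key_store: dict, gpu_id: str) -> bytes:
    # Verify the report is authentic and the firmware measurement is approved.
    if not hmac.compare_digest(sign_report(measurement), signature):
        raise PermissionError("invalid attestation signature")
    if measurement != EXPECTED_MEASUREMENT:
        raise PermissionError("unapproved GPU firmware measurement")
    # Only now may the weights be decrypted, and only on this GPU.
    return key_store[gpu_id]

key_store = {"gpu-0": secrets.token_bytes(32)}
sig = sign_report(EXPECTED_MEASUREMENT)
key = release_per_gpu_key(EXPECTED_MEASUREMENT, sig, key_store, "gpu-0")
```

Because each key is scoped to a single attested GPU, compromising a host or storage tier yields only ciphertext.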
II. Network and Tenant Isolation
Strong network segmentation would allow critical AI assets to operate air-gapped from public networks to reduce attack surfaces. Robust tenant isolation architectures are also needed to prevent cross-tenant access risks.
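The two isolation properties above can be illustrated with a toy default-deny policy check. The tenant labels, network names, and policy shape are invented for this sketch and do not reflect any real control-plane API:

```python
# Toy default-deny flow check combining tenant isolation with an
# air-gapped network segment. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Workload:
    tenant: str
    network: str  # e.g. "public" or "research-airgap" (hypothetical labels)

def flow_allowed(src: Workload, dst: Workload) -> bool:
    # The air-gapped segment accepts no traffic from outside itself.
    if dst.network == "research-airgap" and src.network != "research-airgap":
        return False
    # Default-deny across tenant boundaries; same-tenant traffic passes.
    return src.tenant == dst.tenant

a = Workload("tenant-a", "public")
b = Workload("tenant-b", "public")
w = Workload("tenant-a", "research-airgap")
flow_allowed(a, b)  # cross-tenant: denied
flow_allowed(a, w)  # into the air-gapped segment: denied
```

The design choice worth noting is the ordering: the segmentation rule is evaluated before the tenant rule, so even a workload belonging to the same tenant cannot reach the air-gapped segment from a public network.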
III. Advanced Datacenter Security
Datacenters hosting critical AI assets would need rigorous physical and operational security controls beyond current best practices, potentially incorporating new techniques such as remote “kill switches” and tamper-evident monitoring.
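Tamper-evident monitoring can be sketched with a hash chain, where each log entry commits to the one before it, so any retroactive edit is detectable. This is a generic illustration of the technique, not a description of any specific datacenter system, and the log messages are invented:

```python
# Tamper-evident audit log as a simple hash chain.
import hashlib

def append_entry(chain: list, message: str) -> None:
    # Each entry's hash covers the previous entry's hash, so altering or
    # removing any earlier record breaks every hash that follows it.
    prev = chain[-1]["hash"] if chain else "0" * 64
    digest = hashlib.sha256((prev + message).encode()).hexdigest()
    chain.append({"message": message, "hash": digest})

def verify_chain(chain: list) -> bool:
    # Recompute every hash from the start; any mismatch means tampering.
    prev = "0" * 64
    for entry in chain:
        expected = hashlib.sha256((prev + entry["message"]).encode()).hexdigest()
        if entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list = []
append_entry(log, "badge 1142 entered cage 7")
append_entry(log, "rack A3 door opened")
verify_chain(log)               # True: chain intact
log[0]["message"] = "tampered"  # any retroactive edit is detectable
verify_chain(log)               # False: chain broken
```

In practice the chain head would be periodically anchored somewhere the operator cannot rewrite, so even wholesale replacement of the log is detectable.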
IV. AI Security Standards
Specialized auditing regimes and compliance frameworks would need to emerge, tailored to the unique challenges of securing AI systems.
V. AI for Cyber Defense
AI itself can be leveraged to enhance cyber defenses, for example by automating security workflows and accelerating analysis of high-volume data sources such as logs and telemetry.
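One concrete shape this can take is automated triage that flags anomalous activity in high-volume telemetry. The sketch below is a crude statistical stand-in for an ML-based pipeline, with invented source names and counts, assuming each source's latest event count is compared against its own history:

```python
# Toy anomaly triage: flag sources whose latest event count sits far
# above their own historical baseline (a stand-in for ML-based analysis).
import statistics

def flag_anomalies(counts: dict, threshold: float = 3.0) -> list:
    flagged = []
    for source, history in counts.items():
        baseline, latest = history[:-1], history[-1]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1.0  # avoid divide-by-zero
        if (latest - mean) / stdev > threshold:
            flagged.append(source)
    return flagged

telemetry = {
    "auth-service": [40, 42, 39, 41, 40, 43, 400],  # sudden spike
    "build-server": [10, 12, 11, 9, 10, 11, 12],    # normal variation
}
flag_anomalies(telemetry)  # only "auth-service" is flagged
```

The point of automating this first pass is volume: human analysts then review a short ranked list rather than the raw event stream.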
VI. Resilience via Redundancy and Research
These measures must themselves be implemented defensibly: through continuous security research, redundancy that provides resilience when individual controls fail, and adversarial testing to identify gaps and circumvention techniques.
OpenAI encourages collaboration across industry, research, and government to further develop robust security paradigms capable of protecting society’s interests as AI capabilities progress.