As frontier artificial intelligence (AI) models—that is, models that match or exceed the capabilities of the most advanced AI models at the time of their development—become more capable, protecting them from malicious actors will become more important. In this report, we explore what it would take to protect the learnable parameters that encode the core capabilities of an AI model—also known as its weights—from a range of potential malicious actors. If AI systems rapidly become more capable