On April 11, 2025, the AI Security Institute (AISI) published a paper on evaluating control measures for artificial intelligence (AI) agents. The paper explores how AI control measures can be scaled as large language models (LLMs) become more autonomous and capable of acting without human oversight.

What is AI control?

The paper defines AI control as a set of technical and procedural safeguards designed to prevent AI models from causing harm, even if they pursue unintended goals. Unlike alignment strategies, which aim to shape model behaviour during training, the paper notes that AI control focuses on restricting actions post-deployment. Examples of control measures discussed in the paper include