OpenAI deploys Codex with controls for safe operation

11 articles · Updated · OpenAI · May 8
  • The company said safeguards include sandboxing, approval policies, restricted outbound network access, secure authentication and OpenTelemetry exports across Codex desktop, CLI and IDE tools.
  • An auto-review mode can approve low-risk requests, while higher-risk actions outside the sandbox require review or are blocked; activity also feeds ChatGPT enterprise compliance logs and security triage systems.
  • OpenAI said the setup is aimed at enterprise and education customers adopting autonomous coding agents, giving security teams audit trails and policy controls while preserving developer productivity.
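The approval flow described in the bullets above can be sketched as a simple policy gate. This is an illustrative sketch only, not OpenAI's actual API: the action names, risk labels, and decision strings are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical action names an admin might block outright (illustrative).
BLOCKED_ACTIONS = {"disable_logging", "exfiltrate_secrets"}

@dataclass
class AgentAction:
    name: str          # what the agent wants to do (hypothetical label)
    in_sandbox: bool   # whether the action stays inside the sandbox
    risk: str          # "low" or "high" (assumed classification)

def policy_decision(action: AgentAction) -> str:
    """Return 'approve', 'review', or 'block' for a proposed action.

    Mirrors the described flow: low-risk, in-sandbox actions are
    auto-approved; actions outside the sandbox or at higher risk
    require human review; known-dangerous actions are blocked.
    """
    if action.name in BLOCKED_ACTIONS:
        return "block"
    if action.in_sandbox and action.risk == "low":
        return "approve"   # handled by the auto-review mode
    return "review"        # escalated to a human reviewer

# In the described setup, each decision would also be emitted to an
# audit trail (e.g., an OpenTelemetry export) for compliance logging.
```

A security team's real policy would be far richer, but the shape is the same: every proposed action passes through one gate whose decisions are logged.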
When an AI agent triages its own security alerts, who is really in control of the system?
As AI security tightens, are we killing the productivity gains that coding agents were created to deliver?
Can any digital sandbox truly contain an AI that is designed to be smarter than its creators?

GPT-5.3-Codex Launch and Codex Security: Transforming Developer Workflows with AI-Driven Vulnerability Scanning

Overview

In early 2026, OpenAI launched GPT-5.3-Codex, its most advanced coding AI, capable of handling complex developer tasks beyond code generation. To address cybersecurity risks, OpenAI introduced Codex Security, an AI tool that detects and fixes software vulnerabilities through automated threat modeling and sandbox testing. Alongside it, OpenAI deployed a comprehensive safety system with access controls and monitoring, and its Trusted Access for Cyber program ensures that only vetted defenders can use high-risk features. Despite these safeguards, challenges such as developer deskilling, compliance gaps, and unresolved liability questions remain. OpenAI is expanding enterprise integration through partnerships and a new plugin system while preparing for upcoming regulations such as the EU AI Act.
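The scan-then-fix loop attributed to Codex Security can be illustrated with a toy pipeline. Everything here is an assumption for illustration: the function names, the single toy rule (flagging `shell=True` in a subprocess call), and the stand-in "sandbox test" are not OpenAI's implementation.

```python
def scan(source: str) -> list[str]:
    """Toy scanner: flag shell-injection-prone subprocess usage."""
    return ["subprocess-shell-true"] if "shell=True" in source else []

def propose_fix(source: str, finding: str) -> str:
    """Toy remediation for the one rule the toy scanner knows."""
    if finding == "subprocess-shell-true":
        return source.replace("shell=True", "shell=False")
    return source

def sandbox_test(source: str) -> bool:
    """Stand-in for re-running the patched code in an isolated sandbox."""
    return "shell=True" not in source  # the finding must not persist

def scan_and_fix(source: str) -> str:
    """Detect findings, propose fixes, and keep only fixes that pass."""
    for finding in scan(source):
        candidate = propose_fix(source, finding)
        if sandbox_test(candidate):
            source = candidate   # accept the fix only after sandbox validation
    return source
```

The point of the sketch is the control flow: no proposed fix lands until it survives an isolated test pass, which is the pattern the article describes for automated vulnerability remediation.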

...