A newly documented technique can bypass GPT-5’s safety systems, demonstrating that the model can be led toward harmful outputs without receiving overtly malicious prompts. The method, ...