Application programming interface company Akto Io Inc. today announced the launch of GenAI Security Testing, a new solution aimed at enhancing the security of generative artificial intelligence and ...
Anthropic's Claude Opus 4.6 surfaced 500+ high-severity vulnerabilities that survived decades of expert review. Fifteen days later, they shipped Claude Code Security. Here's what reasoning-based ...
The rapid adoption of Large Language Models (LLMs) is transforming how SaaS platforms and enterprise applications operate.
About 77% of organizations have adopted or are exploring AI in some capacity, pushing for a more efficient and automated workflow. With the increasing reliance on GenAI models and Language Learning ...
CI Spark automates the generation of fuzz tests and uses LLMs to automatically identify attack surfaces and suggest test code. Security testing firm Code Intelligence has unveiled CI Spark, a new ...
Exposed endpoints quietly expand attack surfaces across LLM infrastructure. Learn why endpoint privilege management is important to AI security.
The new ML/AI pentesting solution combines the company's proven testing methodology with its deep adversarial machine learning knowledge to help organizations build more secure models MINNEAPOLIS, Aug ...
Unrelenting, persistent attacks on frontier models make them fail, with the patterns of failure varying by model and developer. Red teaming shows that it’s not the sophisticated, complex attacks that ...
Bonn, Germany, September 13th, 2023 – Code Intelligence today announced CI Spark, an LLM-powered AI-assistant for software security testing. CI Spark automatically identifies attack surfaces and ...
Imagine this scenario. You’ve launched a shiny, new AI assistant to help serve your customers. A user goes to your website and makes some seemingly innocent requests to the assistant, which cheerfully ...
With large language models (LLMs) more widely adopted across industries, securing these powerful AI tools has become a growing concern. At Black Hat Asia 2025 in Singapore this week, a panel of ...
One of the biggest threats with AI today is that it reads untrusted content. That means that attackers can hide malicious instructions inside input for AI, including web pages, PDFs and user uploads.
Some results have been hidden because they may be inaccessible to you
Show inaccessible results