Amazon Bedrock, LangSmith, and SGLang have been found vulnerable to data exfiltration, token theft, and remote code execution. Learn how DNS queries, URL injection, and unsafe pickle deserialization are putting AI platforms at risk.
The Trump administration is reimagining AI security as a competitive asset, urging industry collaboration and rapid innovation. But as key cyber officials exit and regulatory safeguards are rolled back, can American AI outpace global rivals without sacrificing security?
Researchers reveal a novel browser exploit in which custom fonts conceal malicious commands aimed at AI assistants, allowing hackers to bypass automated security checks and target unsuspecting users.
A fresh study exposes how most CISOs are trying to secure modern AI with outdated skills and legacy tools, leaving organizations exposed to new risks.
Researchers uncovered a DNS vulnerability in AWS Bedrock’s AI Code Interpreter, enabling attackers to exfiltrate data from supposedly isolated environments. With AWS opting for warnings over a technical fix, experts urge organizations to rethink their AI security strategies.
A newly discovered flaw in LangSmith, a widely used AI observability platform, exposed enterprise AI accounts to stealthy hijacks and data leaks. Here's how the attack worked, why it mattered, and what organizations must do to stay secure.
Artificial intelligence and large language models are transforming cloud security from reactive vigilance to proactive, intelligent defense, marking a new era in the fight against cybercrime.
Bold Security, founded by Nati Hazut, launches from stealth with $40M to revolutionize cybersecurity. The startup deploys AI agents directly on endpoints for real-time protection, emphasizing privacy and speed as it targets global expansion.
Microsoft confronts the explosive rise of agentic AI, deploying centralized control platforms and AI-powered defenses to counter both new threats and the unintended consequences of AI proliferation in the enterprise.
In a dramatic demonstration, security researchers tricked Perplexity’s Comet AI browser into carrying out a phishing scam in under four minutes, exposing novel vulnerabilities in agentic AI browsers and raising concerns about the future of online scams.