AI Impersonators at the Gates: Adaptive Security’s $81M Bet on Outsmarting Deepfakes
Subtitle: As AI-driven scams surge, Adaptive Security arms enterprises with simulation tools and lands hefty Series B funding.
In a world where a few seconds of synthesized audio can unleash chaos, the battleground of cybersecurity is rapidly shifting. Last week, New York-based startup Adaptive Security announced an $81 million Series B funding round, positioning itself at the forefront of the fight against AI-powered impersonation threats. But can a torrent of venture capital and clever simulation tech really keep up with the pace of evolving cyber deception?
Adaptive Security’s origin story is rooted in a new breed of cyberattack. In 2024, the company set out to confront the wave of AI-generated threats - think voice clones that mimic your CEO, or emails so tailored they could fool even seasoned staff. With their latest funding, the startup is doubling down on a platform that doesn’t just warn about these dangers, but lets organizations rehearse them in real time.
Here’s how it works: Adaptive Security leverages open source intelligence and company-specific data to design hyper-realistic attack simulations. Employees are put to the test with deepfake videos, vishing calls, and sophisticated phishing emails. Those who fall for the ruse are immediately flagged for personalized training, and in some cases, have their access controls temporarily tightened. The idea? Turn every mistake into a teachable moment - before a real criminal gets through.
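The simulate-flag-remediate loop described above can be sketched as a small piece of state-tracking logic. Everything here is an illustrative assumption (the class, the two-failure threshold, the training labels), not Adaptive Security's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the simulate -> flag -> remediate loop:
# failed simulations queue targeted training, and repeated failures
# tighten access. All names and thresholds are illustrative assumptions.

@dataclass
class Employee:
    name: str
    failed_simulations: int = 0
    access_restricted: bool = False
    pending_training: list = field(default_factory=list)

def record_simulation_result(emp: Employee, fell_for_it: bool, channel: str) -> None:
    """Flag a failed simulation, queue channel-specific training, and
    restrict access after repeated failures (threshold is an assumption)."""
    if not fell_for_it:
        return
    emp.failed_simulations += 1
    emp.pending_training.append(f"refresher: spotting {channel} attacks")
    if emp.failed_simulations >= 2:  # illustrative threshold
        emp.access_restricted = True

alice = Employee("alice")
record_simulation_result(alice, fell_for_it=True, channel="vishing")
record_simulation_result(alice, fell_for_it=True, channel="deepfake video")
```

The design point the article highlights is that the response is automatic and immediate: a failure does not just generate a report, it changes the employee's training queue and, in some cases, their access state.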
The approach is both granular and scalable. The platform offers interactive modules in 39+ languages, making it accessible across global teams. Accessibility features and role-based admin controls ensure that training is not only widespread, but also tailored to different organizational layers. Automated risk scoring and threat triage help security teams zero in on their weakest links, using real user behavior rather than hypothetical scenarios.
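Risk scoring of the kind described above is conventionally a function of likelihood and impact. A minimal sketch, assuming likelihood is estimated from observed simulation failures (the formula, the 0-10 impact scale, and the example data are all illustrative, not the vendor's algorithm):

```python
# Hypothetical risk-scoring sketch: score = likelihood * impact, where
# likelihood is the observed failure rate across attack simulations.

def risk_score(failures: int, simulations: int, role_impact: float) -> float:
    """Combine an employee's observed failure rate with the impact of
    their role (illustrative 0-10 scale) into one prioritization score."""
    if simulations == 0:
        return 0.0
    likelihood = failures / simulations
    return round(likelihood * role_impact, 2)

# Triage: rank employees so security teams see the weakest links first.
team = {"finance_admin": (3, 4, 9.0), "intern": (1, 4, 2.0)}
ranked = sorted(team.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
```

Basing the likelihood term on real simulation outcomes, rather than hypothetical scenarios, is what lets the triage reflect actual user behavior.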
The urgency is clear. As CEO Brian Long notes, “A few seconds of audio or a short video clip is now enough for anyone to generate a convincing clone.” With AI impersonations moving from proof-of-concept to daily threat, organizations can no longer rely on gut instinct or manual awareness campaigns alone.
The real test for Adaptive Security - and its investors - will be whether simulated attacks and just-in-time training can truly keep pace with the accelerating arms race of AI deception. For now, with deep pockets and an ambitious roadmap, the company is betting that better rehearsals mean fewer real breaches. In the age of synthetic voices and faces, vigilance is no longer optional - it’s adaptive, or nothing.
WIKICROOK
- Deepfake: A deepfake is AI-generated media that imitates real people’s appearance or voice, often used to deceive by creating convincing fake videos or audio.
- Vishing: Vishing is a phone scam where attackers impersonate trusted entities to steal sensitive information or money through deceptive calls.
- Smishing: Smishing is a digital scam that uses deceptive SMS messages to steal personal data or money from victims, often by impersonating trusted entities.
- Open Source Intelligence: Open Source Intelligence (OSINT) is the collection and analysis of information from publicly available sources, like social media and public records, for investigations.
- Risk Scoring: Risk scoring assigns values to threats or vulnerabilities based on their likelihood and impact, helping organizations prioritize and manage cybersecurity risks efficiently.