Blind Spots in the Boardroom: Why 81% of Human Cyber Risks Go Unseen
As Living Security announces HRMCon 2025, a new report reveals organizations are missing most human-driven cyber threats - leaving gaping holes in digital defenses.
Fast Facts
- Organizations detect only 19% of human risk activity, according to the 2025 State of Human Cyber Risk Report.
- HRMCon 2025, hosted by Living Security, convenes in Austin and online on October 20, 2025.
- Speakers include CISOs from Aurora and Mastercard, as well as analysts from Forrester.
- Living Security’s Unify platform claims to boost risk visibility threefold over traditional training.
- Human-driven threats - like credential misuse and insider risks - often evade standard security tools.
The Hidden Majority: A Crisis of Human Risk Blindness
Imagine a security guard watching only one out of every five doors in a sprawling skyscraper. That’s the dire reality painted by the latest findings from Living Security and the Cyentia Institute: on average, businesses detect just 19% of risky behaviors by their own people. The rest - misused passwords, policy violations, and subtle signs of insider threats - slip by, invisible to even the most advanced technical defenses.
This “human risk blindness” isn’t new, but its scale is sobering. Despite billions spent on firewalls and AI-powered threat detection, attackers still exploit the weakest link: people. From the infamous 2020 Twitter hack (where attackers tricked employees into giving up credentials) to recent phishing campaigns leveraging deepfake audio, human error and manipulation remain cybersecurity’s Achilles’ heel. The Living Security report echoes similar warnings from Verizon’s Data Breach Investigations Report, which has found year after year that the majority of breaches involve a human element.
HRMCon 2025: Turning Awareness into Action
In response, Living Security is making a bold bet on culture and accountability with HRMCon 2025. The one-day event, held at Austin’s Q2 Stadium and streamed globally, promises more than lectures. It’s a call to arms for CISOs, HR, and risk managers to operationalize “human risk management” (HRM): a discipline that treats risky behavior as a measurable, manageable business issue, not just a training checkbox.
Headliners include Brett Wahlin (CISO, Aurora), who draws on counterintelligence to reimagine the human factor; Tim Taylor (Mastercard), who offers a 90-day playbook for exposing hidden risks; and Forrester’s Jinan Budge, who explores how AI “agents” might soon help spot - and even prevent - risky behaviors before they spiral. Sessions promise real-world frameworks, case studies, and practical tools to close the visibility gap, with free registration and CPE credits for security pros.
Why the Old Playbook Fails - and What Comes Next
Traditional “security awareness” training - think annual quizzes and phishing simulations - has proven woefully inadequate. Most platforms offer little more than compliance checkmarks, failing to identify the small percentage of users who pose the greatest risk. Living Security’s Unify platform, for example, claims to integrate behavior, identity, and threat data to pinpoint the 8–12% of users most likely to cause a breach, automating targeted interventions in real time. The goal: move from passive awareness to active risk reduction.
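To make the idea of combining behavior, identity, and threat signals concrete, here is a minimal sketch of how a per-user risk score might be computed and the riskiest slice of the workforce flagged for targeted intervention. The field names, weights, and 10% cutoff are assumptions made for illustration only; this is not Living Security’s actual Unify scoring model.

```python
# Illustrative only: a naive per-user risk score combining behavior, identity,
# and threat-intel signals, then flagging the highest-risk slice of staff.
# Field names, weights, and the 10% threshold are assumptions for this sketch.
from dataclasses import dataclass


@dataclass
class UserSignals:
    user_id: str
    phish_clicks_90d: int       # behavior: phishing clicks in the last 90 days
    policy_violations_90d: int  # behavior: DLP or policy alerts
    dormant_admin_roles: int    # identity: unused privileged entitlements
    threat_mentions: int        # threat intel: e.g. credentials seen in dumps


def risk_score(u: UserSignals) -> float:
    # Simple weighted sum; a real platform would normalize and calibrate.
    return (
        3.0 * u.phish_clicks_90d
        + 2.0 * u.policy_violations_90d
        + 4.0 * u.dormant_admin_roles
        + 5.0 * u.threat_mentions
    )


def top_risk_users(users: list[UserSignals], fraction: float = 0.10) -> list[UserSignals]:
    """Return the highest-scoring fraction of users for targeted intervention."""
    ranked = sorted(users, key=risk_score, reverse=True)
    cutoff = max(1, int(len(ranked) * fraction))
    return ranked[:cutoff]


if __name__ == "__main__":
    staff = [
        UserSignals("alice", 0, 0, 0, 0),
        UserSignals("bob", 3, 1, 0, 0),
        UserSignals("carol", 1, 0, 2, 1),
    ]
    for u in top_risk_users(staff):
        print(f"{u.user_id}: score={risk_score(u):.1f} -> enroll in targeted training")
```

The point of the sketch is the shift it represents: rather than pushing the same training to everyone, signals from multiple systems are fused to concentrate attention on the small group of users who account for most of the risk.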
This new approach is gaining traction as organizations grapple with hybrid work, evolving insider threats, and the rise of AI-driven social engineering. As the HRM market grows - Forrester named Living Security a global leader - companies are under pressure to prove not just that they’re training employees, but that they’re measurably reducing human risk. In a world where people are both the first line of defense and the biggest vulnerability, the stakes have never been higher.
WIKICROOK
- Human Risk Management (HRM): Human Risk Management is a cybersecurity approach that identifies and reduces risks from employee behaviors, focusing on people rather than just technical threats.
- Credential Misuse: Credential misuse is when passwords or access privileges are used in unauthorized or risky ways, either by mistake or with malicious intent.
- Insider Threat: An insider threat is when someone within an organization misuses their access to systems or data, intentionally or accidentally causing harm.
- Phishing Simulation: A phishing simulation is a fake phishing attack used to test and train employees to recognize and avoid real phishing scams.
- Autonomous AI Agent: An Autonomous AI Agent is an AI system that independently analyzes data, makes decisions, and acts - commonly used to detect and counter cyber threats.
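As a rough illustration of the “Autonomous AI Agent” entry above, the following toy observe-decide-act loop flags possible credential misuse in a stream of login events. The event fields and the single rule are invented for this sketch; production agents rely on far richer telemetry and models.

```python
# Toy observe-decide-act loop in the spirit of an autonomous security agent.
# Event fields and the detection rule are invented for illustration only.
from typing import Iterable


def is_credential_misuse(event: dict) -> bool:
    # Decide: flag privileged logins coming from an unusual country.
    return bool(event.get("privileged")) and event.get("country") != event.get("usual_country")


def respond(event: dict) -> str:
    # Act: a real agent might disable the session or open an incident ticket.
    return f"step-up MFA required for {event['user']}"


def agent_loop(events: Iterable[dict]) -> list[str]:
    actions = []
    for event in events:  # Observe: stream of login events
        if is_credential_misuse(event):
            actions.append(respond(event))
    return actions


if __name__ == "__main__":
    sample = [
        {"user": "dana", "privileged": True, "country": "BR", "usual_country": "US"},
        {"user": "eli", "privileged": False, "country": "US", "usual_country": "US"},
    ]
    print(agent_loop(sample))
```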