Vendors selling enterprise security love to demo their detection dashboards. Pretty graphs, threat feeds, AI-correlated anomalies. The visualization implies the breach will come from somewhere external and clever. The actual breach usually comes from someone in accounting clicking a link in a Tuesday morning email.
This isn’t a new finding. Verizon’s annual Data Breach Investigations Report puts the human element (phishing, credential misuse, error, social engineering) at roughly 70 to 80% of confirmed breaches, year after year. The number doesn’t move much, no matter how much perimeter spending grows.
Why the human vector is so durable
Phishing works because it doesn’t fight technical defenses; it borrows the user’s authenticated session and credentials. Once Karen in payroll types her password into a fake Microsoft login page, the firewall, the EDR agent, and the SOC dashboard see a normal authenticated user reading email. Multi-factor authentication helps, but attackers have adapted with MFA-fatigue prompts, SIM-swap attacks, and adversary-in-the-middle proxies that steal session cookies in real time.
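One way a SOC can still catch a stolen session is with a replay heuristic: the same session token appearing from two different client IPs within a short window is a signal that an adversary-in-the-middle proxy is replaying the cookie. A minimal sketch, with an assumed event format (timestamp, token, client IP) and an illustrative five-minute window:

```python
from datetime import datetime, timedelta

# Assumed detection window: a real user rarely hops networks this fast,
# but a proxied attacker replaying a stolen cookie often does.
REPLAY_WINDOW = timedelta(minutes=5)

def find_suspect_sessions(auth_events):
    """auth_events: list of (timestamp, session_token, client_ip) tuples.

    Returns the set of session tokens seen from two different IPs
    within REPLAY_WINDOW of each other.
    """
    last_seen = {}   # token -> (timestamp, ip) of most recent event
    suspects = set()
    for ts, token, ip in sorted(auth_events):
        if token in last_seen:
            prev_ts, prev_ip = last_seen[token]
            if ip != prev_ip and ts - prev_ts <= REPLAY_WINDOW:
                suspects.add(token)
        last_seen[token] = (ts, ip)
    return suspects
```

Real deployments would key on ASN or geolocation distance rather than raw IP equality (mobile clients legitimately change addresses), but the shape of the signal is the same.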
Social engineering compounds the problem. The 2023 MGM Resorts breach started with a phone call to a help desk. The 2020 Twitter breach pivoted on internal chat-tool access obtained through phone-based pretexting. The Lapsus$ group built a track record of major intrusions almost entirely on convincing tier-1 support to reset credentials. Sophisticated technical defenses don’t matter if a person with the keys agrees to hand them over.
What annual training does and doesn’t do
Most companies respond by mandating annual security awareness training: a 30-minute video, a quiz, a checked compliance box. Studies of these programs show modest, short-lived improvement that decays within weeks. Click rates on simulated phishing drop briefly, then return to baseline. Training of this format is essentially security theater.
What does work is more frequent, contextual, and operational. Continuous simulated phishing with immediate feedback at the moment of the click. Just-in-time prompts (“this email is from outside your org, with a link to a credential page; verify before entering”). Hardware security keys for high-privilege accounts, since they functionally eliminate phishable credential reuse. And organizational design changes: making it easy and consequence-free to report a suspected click, so the SOC gets a 30-second head start rather than a 30-day forensic investigation.
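The just-in-time prompt above reduces to a simple rule: external sender plus a credential-looking link. A minimal sketch, assuming a hypothetical org domain (`example.com`) and an illustrative keyword list; real gateways use far richer signals (sender reputation, domain age, lookalike detection):

```python
from urllib.parse import urlparse

ORG_DOMAIN = "example.com"  # assumed org domain for illustration

# Words that commonly appear in credential-harvesting URLs.
CREDENTIAL_HINTS = ("login", "signin", "sign-in", "password", "verify", "sso")

def needs_banner(sender: str, urls: list[str]) -> bool:
    """Flag external senders whose message links to a credential-looking page."""
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if sender_domain == ORG_DOMAIN or sender_domain.endswith("." + ORG_DOMAIN):
        return False  # internal mail: no warning banner
    for url in urls:
        parsed = urlparse(url)
        haystack = (parsed.netloc + parsed.path).lower()
        if any(hint in haystack for hint in CREDENTIAL_HINTS):
            return True
    return False
```

The design point is the timing, not the sophistication: the warning fires at the moment of the decision, which is exactly what an annual training video cannot do.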
The cultural piece nobody costs out
Security cultures that punish reporting produce silence, not safety. Engineers who fear blame don’t escalate the misconfigured S3 bucket they noticed; admins who get yelled at for clicking the test phish don’t report the real one. The mature posture treats the click as a system failure (the simulated email got through filters, the warning banners didn’t fire, the user was busy) rather than a personal one.
The takeaway
Security spending follows a perimeter mindset because perimeters are easy to put on a slide. The breaches follow people. Reduce phishable credentials, design fast no-blame reporting, train continuously rather than annually, and accept that the best ROI in infosec is rarely the next dashboard subscription.