Cyber Defense Can’t Keep Up—Why CISOs Are Betting on New AI-Security Players
Cyber Defense Can’t Keep Up—Why CISOs Are Betting on New AI-Security Players
Mike May — CEO & CISO, Mountain Theory
Venture investors poured $20 billion into cybersecurity startups in 2024, even as overall tech funding slumped 35 percent. The magnet? New attacks are being launched, powered by generative AI and a global shortage of specialists to stop them. IDC now forecasts security spending will jump another 12 percent in 2025, with AI-driven defenses the fastest-growing slice (Worldwide Security Spending to Increase by 12.2% in 2025 as ... - IDC). Against that backdrop, Mountain Theory and a handful of upstarts are racing to harden the very models enterprises have begun to trust with sensitive data.
Why demand is spiking
Breach costs keep climbing. IBM’s 2024 report pegs the average incident at $4.88 million; organizations deploying security AI and automation trim that by $2.22 million (Cost of a data breach 2024 - IBM).
Threat volume grows. ENISA’s latest landscape flags model poisoning and supply-chain exploits as top emerging risks (ENISA Threat Landscape 2024 - European Union).
Talent gap widens. ISC² estimates a 4 million-person shortfall in cyber talent worldwide.
Regulators turn up heat. The OECD rewrote its AI Principles in 2024 to “keep pace with technology,” while the EU AI Act sets hefty fines for unmitigated risk (OECD updates AI Principles to stay abreast of rapid technological ...).
Board pressure rises. PwC found 84 percent of U.S. executives now rank “trustworthy AI” as a top-three priority (PwC's 2024 US Responsible AI Survey).
Google Cloud CISO Phil Venables sums it up: “Expanding security foundations into the AI stack is non-negotiable” (Google's Secure AI Framework (SAIF), Exclusive: Google lays out its vision for securing AI).
Where innovation is happening
AI Trust-Layer Platforms
HiddenLayer, Robust Intelligence, and Protect AI monitor prompts, outputs, and weight files in real time—“an EDR for models” (Tackling Trust, Risk and Security in AI Models - Gartner, ENISA Threat Landscape 2024 - European Union).
Posture-Management Dashboards
Wiz just added AI-SPM views that surface shadow checkpoints and LoRA adapters across multi-cloud estates (Cost of a data breach 2024 - IBM).
Secure-by-Design Frameworks
Google’s SAIF lists six baseline controls—model provenance, runtime logging, automated containment—that early adopters are baking into CI/CD pipelines (Google's Secure AI Framework (SAIF)).
Mountain Theory’s differentiation
Model-layer telemetry captures every prompt, gradient, and weight delta.
Autonomous containment sandboxes suspect behaviors in milliseconds—mirroring Google’s SAIF principles.
Supply-chain attestation signs each dataset and checkpoint, aligning with ISO 42001 clauses on AI management (OECD updates AI Principles to stay abreast of rapid technological ...).
Patent-pending trust kernel built by a founding team that includes the co-creator of multi-factor auth and a former NORAD CTO.
Analyst firm CB Insights lists Mountain Theory as a “category creator” alongside HiddenLayer and Protect AI (Top HiddenLayer Alternatives, Competitors - CB Insights).
Selecting a partner: five questions
Does the platform map to the NIST AI RMF? Continuous governance beats ad-hoc scans.
Can it verify weight lineage and detect drift? Prevents rogue checkpoints in prod.
How deep is real-time monitoring? Models, vector stores, embeddings, and fine-tunes should all stream logs.
What’s the incident-response SLA? Ask for sub-hour containment targets.
Is the roadmap funded? Series-A vendors need a runway long enough to survive your three-year contract.
Leadership checklist for 2025
Commission a machine-learning bill of materials across all business units.
Pilot at least one trust-layer or AI-SPM tool; benchmark latency and coverage.
Align quarterly risk reviews with ISO 42001 and EU AI Act draft annexes.
Run red-team drills for model poisoning and prompt-injection; track time-to-contain.
The budget signals are clear: boards are paying for innovation, not incremental patchwork. Whether you pick Mountain Theory or another specialist, the winning play is to embed security into the AI stack now, before the next release cycle leaves defenses chasing from behind.
Mike May researches model-layer security for Mountain Theory. Views are his own.