A new study reveals a stark gap in enterprise security: 67% of CISOs report limited visibility into AI usage across their organizations, leaving critical systems vulnerable. Despite AI's widespread adoption, security leaders rely largely on outdated tools and face a severe shortage of specialized expertise to defend against emerging AI-specific threats. The challenge is not budget-driven but stems from foundational skill and tooling deficiencies, as The Hacker News reports from Pentera's 2026 AI and Adversarial Testing Benchmark Report.
AI Adoption Outpaces Security Readiness
AI systems are now deeply integrated across corporate technology, from cloud platforms and identity systems to applications and data pipelines. This widespread deployment, however, comes with fragmented ownership across disparate teams, eroding centralized oversight. As a direct result, 67% of CISOs reported limited visibility into how AI is being used across their organization; no respondents indicated they have full visibility.

This lack of insight means basic security questions often go unanswered. Security teams struggle to identify which identities AI systems use, what data they access, or how they behave when controls fail. Such foundational gaps make effective risk assessment nearly impossible. The expanding use of AI in enterprises is prompting CISOs to rethink their data protection strategies, as field CISO Chris Cochran of the SANS Institute notes.
Organizations must proactively evaluate how new technologies use company data and continuously monitor traffic flow. This stance helps identify if new controls are needed for AI integration, ensuring systems can benefit from evolving vendor solutions.
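One first step toward that kind of monitoring is simply inventorying which identities are calling AI services at all. The sketch below is a minimal, hypothetical illustration of that idea: it scans simple egress proxy log lines for requests to well-known AI API hostnames and tallies usage per source identity. The hostname list, log format, and function name are illustrative assumptions, not drawn from the report.

```python
from collections import Counter
from urllib.parse import urlparse

# Illustrative (not exhaustive) list of hostnames associated with
# common hosted AI services; a real deployment would maintain this
# list from vendor documentation and threat-intel feeds.
AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def inventory_ai_traffic(log_lines):
    """Count requests per (source identity, AI host) from simple
    'user url' proxy log lines — a hypothetical minimal format."""
    usage = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) != 2:
            continue  # skip malformed lines
        user, url = parts
        host = urlparse(url).hostname
        if host in AI_API_HOSTS:
            usage[(user, host)] += 1
    return usage

# Example log lines (made up for illustration).
logs = [
    "svc-build https://api.openai.com/v1/chat/completions",
    "alice https://api.anthropic.com/v1/messages",
    "bob https://example.com/index.html",
    "svc-build https://api.openai.com/v1/embeddings",
]
print(inventory_ai_traffic(logs))
```

Even a crude inventory like this surfaces the question the survey highlights: which identities (here, the `svc-build` service account) are touching AI systems, and whether anyone owns that traffic.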
The Core Challenge: Expertise, Not Funding
While AI security is a frequent boardroom topic, the main obstacles are not financial. CISOs identified lack of internal expertise (50%) as their top barrier, followed closely by limited visibility into AI usage (48%). Insufficient security tools designed specifically for AI systems (36%) also pose a significant challenge, while only 17% cited budget constraints. This indicates a willingness to invest but a critical shortage of specialized skills to evaluate AI-related risks in real environments.

AI introduces new behaviors such as autonomous decision-making, indirect access paths, and privileged interactions between systems. Without the right expertise and active testing, it is difficult to assess whether existing controls remain effective. Most companies extend existing security controls to cover AI infrastructure: a striking 75% of CISOs rely on legacy security tools such as endpoint or application security, and only 11% reported having security tools designed specifically for AI. The pattern is reminiscent of past technology shifts, in which organizations adapted existing defenses before tailored practices emerged.