A new KPMG study reveals that 74% of global leaders plan to keep AI a top investment priority despite economic uncertainty, yet three-quarters remain concerned about data security and privacy, exposing a critical gap between AI spending and value realization. Only 11% of organizations qualify as "AI leaders" who see meaningful business value, while the majority struggle with foundational challenges, including data quality, governance, compliance, and risk management, that have persisted since early AI adoption.
Source: TechRadar
What to know:
Why it matters:
The research underscores a fundamental tension in enterprise AI adoption: investment enthusiasm without operational readiness creates risk exposure rather than competitive advantage. The shift from generative AI to agentic AI amplifies this challenge; autonomous AI agents require robust governance frameworks and trust mechanisms that most organizations have not yet built. For mid-sized businesses, this study validates the need to prioritize foundational capabilities (comprehensive data governance, proactive risk assessment, and continuous monitoring) before scaling AI deployments, so that increased spending translates into measurable business value rather than expanded security vulnerabilities.
A critical ChatGPT vulnerability allowed attackers to silently exfiltrate sensitive user data through a DNS-based covert channel that bypassed platform guardrails and security warnings. Security researchers discovered the flaw, which exploited the fact that DNS queries were treated as harmless infrastructure, creating a blind spot that enabled data theft without triggering any user alerts or consent prompts.
Source: TechRadar
What to know:
Why it matters:
This incident exposes a fundamental assumption gap in AI security: organizations trust that GenAI platforms prevent unauthorized data extraction, but novel attack vectors continue to emerge. The use of DNS, a protocol designed for basic name resolution, as a data exfiltration channel demonstrates that AI guardrails focused on policy and intent can miss infrastructure-level exploits. For mid-sized businesses deploying ChatGPT across teams, this vulnerability underscores the need for continuous monitoring of AI interactions at the network level, real-time anomaly detection across all data transmission protocols, and proactive risk assessment of GenAI tools before sensitive data enters the conversation.
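To illustrate why a DNS covert channel slips past policy-level guardrails, the sketch below shows the general shape of the technique (not the actual exploit, whose details were not disclosed here): secret data is chunked into subdomain labels of queries to an attacker-controlled domain, and a simple network-level heuristic flags such queries by label length and character entropy. All function names and thresholds are illustrative assumptions.

```python
import math

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; high values suggest encoded data."""
    if not s:
        return 0.0
    n = len(s)
    counts = {c: s.count(c) for c in set(s)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def encode_as_dns_labels(secret: bytes, attacker_domain: str,
                         label_len: int = 60) -> list[str]:
    """How a covert channel smuggles data: hex-encode the secret and
    split it into subdomain labels (DNS allows up to 63 octets each)."""
    hex_data = secret.hex()
    chunks = [hex_data[i:i + label_len]
              for i in range(0, len(hex_data), label_len)]
    return [f"{chunk}.{attacker_domain}" for chunk in chunks]

def looks_like_exfiltration(query: str, max_label_len: int = 30,
                            entropy_threshold: float = 3.5) -> bool:
    """Network-level heuristic: flag queries whose leftmost label is
    unusually long or high-entropy. Thresholds are illustrative and
    would need tuning against real traffic."""
    label = query.split(".")[0]
    return len(label) > max_label_len or shannon_entropy(label) > entropy_threshold
```

For example, `encode_as_dns_labels(b"api_key=sk-test", "attacker.example")` yields queries every one of which trips `looks_like_exfiltration`, while an ordinary lookup like `www.example.com` does not. The point is the one the article makes: guardrails that reason about intent never see these queries, but a monitor watching the DNS layer itself can.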
Protections that work in the background without blocking workflows or slowing teams down.
Small Language Models (SLMs) run directly in the browser or on local environments—nothing sensitive is ever sent to the cloud.
Our platform is built to adapt—whether you're rolling out GenAI, scaling SaaS, or securing hybrid teams.