The rapid integration of artificial intelligence into web browsers promises a new era of productivity and automation. However, this convenience comes with significant and potentially dangerous trade-offs. Leading research firm Gartner has issued a stark advisory, urging organizations to block the use of so-called "agentic" or AI browsers due to critical, unmitigated cybersecurity risks that could lead to catastrophic data leaks and unauthorized autonomous actions.
The Core of Gartner's Warning
In a recent advisory titled "Cybersecurity Must Block AI Browsers for Now," Gartner analysts Dennis Xu, Evgeny Mirolyubov, and John Watts delivered a clear and urgent message to Chief Information Security Officers (CISOs). They argue that the default configurations of AI browsers, which include products from OpenAI and Perplexity, are fundamentally flawed from a security perspective. These settings are designed to prioritize a seamless user experience and powerful automation capabilities, often at the direct expense of robust data protection and security controls. This design philosophy creates a vulnerable environment where sensitive corporate and personal information is at constant risk.
Key Risks Identified by Gartner:
- Data Exposure to Cloud Backends: User data, browsing history, and open tab content is often sent to external AI servers.
- Autonomous Action on Malicious Sites: AI agents can be hijacked to perform actions like stealing credentials or financial data.
- Misuse for Compliance Tasks: Employees may use AI to automate mandatory training (e.g., cybersecurity), negating its value.
- Prioritization of UX over Security: Default settings favor convenience, requiring deliberate hardening by organizations.
How AI Browsers Become a Security Liability
The threat manifests in several concrete ways. First, AI browsers frequently rely on cloud-based AI backends to process requests. This means that any data viewed or interacted with in the browser—including confidential emails, financial details, or internal corporate documents—can be transmitted to these external servers. If a user has a sensitive tab open while using the browser's AI sidebar for an unrelated task, that sensitive data may be unintentionally sent to the cloud. Second, the autonomous nature of these browsers is a double-edged sword. They can be tricked by malicious websites into performing unauthorized actions, such as collecting and exfiltrating login credentials or bank account information directly to an attacker.
The Human Factor and Unintended Consequences
Beyond technical vulnerabilities, Gartner highlights a significant behavioral risk. Employees, seeking efficiency, may be tempted to use AI browsers to automate mandatory but tedious tasks. A prime example cited is using an autonomous agent to complete cybersecurity awareness training. While this might check a compliance box, it completely defeats the educational purpose, leaving the organization more vulnerable to social engineering attacks. Furthermore, the convenience of AI assistance can lead to complacency, where users inadvertently provide far more context or data to the AI than is necessary or safe, expanding the potential attack surface.
Gartner's Recommended Actions for Organizations:
- Block AI Browsers: CISOs should block all AI browsers in the foreseeable future to minimize risk exposure.
- Conduct Risk Assessments: Evaluate the AI backend services for compliance with data protection and security policies.
- Educate Users: Train employees that any data viewed could be sent to an AI service, warning them not to have sensitive information active while using AI browser features.
The Industry Response and Path Forward
Security experts echo Gartner's concerns while cautioning against permanent, blanket bans. Javvad Malik, Lead Security Awareness Advocate at KnowBe4, noted that while the risks in these early stages are not well understood, sustainable strategy requires nuance. He advocates for rigorous, organization-specific risk assessments of the AI services powering these browsers. This measured approach allows for potential future adoption under strict oversight and hardened security policies, rather than an outright rejection of the technology. The consensus is clear: until security is baked into the core design of AI browsers and organizations develop comprehensive playbooks to govern their use, the risks currently outweigh the benefits. For now, Gartner's advice stands: block them to protect your data.
