Sigma Browser Ltd. announced on Friday the launch of a privacy-focused web browser powered by an on-device artificial intelligence model that operates without uploading user data to the cloud.
As major browser companies ride the AI wave, integrating large language models into their platforms, native AI experiences have become increasingly common: Google has embedded Gemini in Chrome, and Mozilla has added AI features to Firefox. AI developers, meanwhile, have launched dedicated AI-native browsers of their own, such as Perplexity AI's Comet and OpenAI's Atlas.
Most of these solutions rely on cloud-based AI, sending user queries to remote servers for processing and content generation.
In contrast, Sigma's Eclipse browser runs a local large language model (LLM) directly on the user's device, enabling full offline operation. All interactions, questions, and personal data remain on the device, which the company says eliminates the risks of third-party access, response manipulation, and data leaks.
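Sigma has not published Eclipse's internals, but the general idea of fully local inference can be illustrated with a minimal Python sketch using the open-source llama-cpp-python library. The model path and prompt below are placeholders; this is an assumed stand-in, not Eclipse's actual stack.

```python
# Minimal sketch of fully local LLM inference with llama-cpp-python.
# Illustrative only; Sigma has not disclosed what Eclipse uses internally.
from llama_cpp import Llama

# Load a quantized model file from local disk; nothing leaves the machine.
# "model.Q4_K_M.gguf" is a placeholder for any GGUF-format model file.
llm = Llama(model_path="model.Q4_K_M.gguf", n_ctx=4096, verbose=False)

# Generate a completion entirely on-device; no network calls are made.
result = llm(
    "Summarize the privacy benefits of local AI inference:",
    max_tokens=128,
    temperature=0.7,
)
print(result["choices"][0]["text"])
```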
"AI has become incredibly powerful, but it's also increasingly centralized and costly," said Nick Trenkler, co-founder of Sigma. "We believe users shouldn’t have to sacrifice privacy or pay recurring cloud fees to access advanced AI capabilities."
The company emphasized that the LLM embedded in Eclipse is uncensored and free from ideological or content-based restrictions, ensuring neutral and unfiltered responses. This design aligns with Sigma’s core mission: giving users complete control over their AI experience without limitations on topics, viewpoints, or expression.
An updated version of the browser also introduces on-device PDF analysis, allowing users to process and extract insights from documents locally without external transmission.
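Sigma did not detail how the PDF feature works. As a hedged sketch, assuming a conventional extract-then-summarize pipeline, local processing could look like the following, using the open-source pypdf library and a placeholder filename.

```python
# Sketch of on-device PDF analysis: extract text locally with pypdf,
# then hand it to a locally running model. Assumed pipeline, not
# Eclipse's documented implementation.
from pypdf import PdfReader

def extract_text(path: str) -> str:
    """Read every page of a PDF from local disk; no data is transmitted."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

document_text = extract_text("report.pdf")  # placeholder filename
# document_text can now be summarized by a local LLM, e.g. the
# llama-cpp sketch shown earlier, keeping the whole pipeline offline.
```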
Eclipse is not the first browser to support local LLMs. In 2024, Brave Software Inc. introduced a “bring-your-own-model” feature for its Leo AI assistant, enabling basic integration with locally hosted models. However, this approach often requires technical setup, including installing tools like Ollama or other local inference engines.
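For comparison, the kind of manual setup Brave's approach implies can be sketched as follows: once a model is running under Ollama, applications query it over Ollama's local HTTP API at localhost:11434. The model name and prompt here are illustrative placeholders.

```python
# Querying a locally hosted model through Ollama's HTTP API, the sort
# of "bring-your-own-model" setup described above.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",   # any model previously fetched with `ollama pull`
    "prompt": "Explain local LLM inference in one sentence.",
    "stream": False,     # return a single JSON object instead of a stream
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```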
To run mid-sized models efficiently, particularly those around 7 billion parameters, users typically need at least 16 to 32 GB of system memory. A modern GPU is also important: entry-level cards like NVIDIA's RTX 3060 can suffice, while optimal performance demands higher-end hardware such as the RTX 4090. Larger models require even more VRAM and computational power.
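A back-of-the-envelope calculation shows where those figures come from: the weights of a model occupy roughly the parameter count times the bytes per parameter, before any overhead for activations or context. The sketch below is an approximation introduced for illustration, not a vendor specification.

```python
# Rough memory estimate behind the hardware figures above. Ignores
# activation and context (KV cache) overhead, so real needs run higher.
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory for model weights alone, in gigabytes."""
    return params_billions * 1e9 * bytes_per_param / 1024**3

for label, bytes_pp in [("FP16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"7B model at {label}: ~{weight_memory_gb(7, bytes_pp):.1f} GB")

# Output:
# 7B model at FP16: ~13.0 GB
# 7B model at 8-bit: ~6.5 GB
# 7B model at 4-bit: ~3.3 GB
```

This is why a quantized 7B model fits comfortably within 16 to 32 GB of system memory, while full-precision or larger models push users toward high-VRAM GPUs.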
By offering a built-in, locally optimized LLM out of the box, Sigma aims to lower the barrier to private AI browsing. The company describes the release as a step toward more transparent, user-controlled artificial intelligence, one that balances high performance and advanced functionality with strong privacy protections and accessibility.