Big Tech Proved My Point

So we have all seen the fervor around AI and how "revolutionary" it is. Now we can waste immense amounts of power so AI can tell us that there are two or three instances of the letter 'a' in the word 'strawberry'; now that's revolutionary! Sarcasm aside, AI has revolutionized the exploitation of personal data, so it would obviously be a good idea to keep AI from being embedded deep in your system. Big Tech companies such as OpenAI, Apple, Microsoft, and Google seem to have other plans: AI will be your digital assistant, embedded deep into your system and your browser, trusted to answer questions and even browse the internet for you. How could this go wrong? Let's go company by company and see what the exploits have looked like so far.

Apple

Apple has gotten hit the hardest with AI vulnerabilities. In mid-2025, Apple Intelligence got hit with Sploitlight, a bug that allowed for a TCC bypass. This bug broke a big part of the macOS security model. macOS has a feature called Transparency, Consent, and Control (TCC); it's essentially mandatory access control, but enabled universally. When working properly, it is a critical piece of Apple's desktop security model. Apple Intelligence had other plans, however: it cached user files without regard for TCC, so all it took was a malicious Spotlight plugin to completely bypass this part of the security model.

Google

Google is well known in the field for creating secure computing products, such as Chrome OS and Android. Chrome OS is one of the most secure desktop computing platforms out there, and Android is a highly secure mobile platform. However, thanks to some decisions in how Gemini has been implemented, you may want to rethink your purchase of a Chromebook or Google Drive subscription. Google's Chrome browser is now getting Gemini integration. You might be tempted to think "big whoop, it's a chatbot and image generator. What can it possibly do?" Well, for starters, Gemini is already fully integrated into the Google Drive ecosystem, and has been for some time. Chrome is also rolling out an 'autobrowse' feature, where AI browses the internet for you. This means AI will be parsing information from webpages, interacting with JavaScript apps, entering data, and so on. Gemini previously had a data disclosure vulnerability known as GeminiJack. It does exactly what it sounds like: it uses prompt injection to hijack your AI agent, allowing for data disclosure. The best part? It required no user interaction! This is splendid! Let's use an AI with a history of zero-click data disclosure bugs to browse the internet! WHAT COULD GO WRONG? Jesus-fucking-Christ, Google: for a company with brilliant security researchers, you do stupid shit sometimes.

Microsoft

Microsoft needs no introduction; after all, you may be reading this on Windows, own an Xbox, or use their Office or online application suites. Microsoft has put great effort over the years into hardening Windows security for corporate users, and has recently been putting a lot of love into their baby: Copilot. Copilot has been deeply integrated into Windows 11 for a hot minute now, becoming the new Clippy/Cortana/Bonzi Buddy/etc. In this case, Bonzi Buddy may be the more apt descriptor, as Copilot can be taken over through interaction with a URL. Not just any URL: a valid Microsoft URL. This is how we got Reprompt, a data disclosure vulnerability triggered by users clicking a copilot.com link. From there, attackers can basically reprompt Copilot as much as they'd like to exfiltrate data without the user's knowledge. Microsoft definitely dropped the ball here, but the ball keeps on rolling, because Microsoft is now rolling out an agentic AI mode for the Edge browser. You know, the AI program that can get hijacked with a URL? Let's slap that shit into an agentic browser mode! At least Apple could claim not to have known Sploitlight would happen, since they had already tried to harden the access controls for Spotlight plugins, but Google and Microsoft seem determined to make things easier for bad actors. I almost have to wonder if AI is a sort of backdoor we're all being coerced into using.

OpenAI

Personally, I hate OpenAI. I hate ChatGPT, specifically how it affirms your beliefs, and I personally think ChatGPT is the worst AI model for your mental health. Somewhere out there, the dumbest person you know is being told "you're absolutely right" by ChatGPT, and you know they're getting a total ego trip from it. So, as much as it PAINS me to say this, OpenAI seems to be taking steps to try and harden Atlas. Do I think this will work? It may protect against some types of prompt injection, so it isn't a fruitless endeavor, but I do question the security model of Atlas and the use of AI to attack their own AI browser. It sounds like they're throwing AI at their AI and expecting things to work out with little outside input, when we need human researchers testing these things as well. AI is formulaic; you can't expect it to find the novel prompt injections that humans can come up with. The fact that Atlas lacks any sort of outside mitigation for potential data disclosure from prompt injection is also concerning, but I digress. OpenAI may be doing SOMETHING to prevent prompt injection from wrecking your day. However, even they have to admit that there is no solution for prompt injection at this time.

What is the point here?

The point of this website always has been, and always will be, to point people towards the most secure and trustworthy open source software. I specify open source because companies can build some of the most secure products, only to completely fuck it up chasing a trend (looking at you, Google and Apple). AI is the hot new thing, it seems, and everyone and their dog is using it. While a lot of these AI slop products have proved to be of little consequence to the average user's security, "agentic browsers" can absolutely cripple your protections. What point is there in using sandboxing and MAC if AI can completely break your security model?

As it stands, most browsers out there are including some "agentic" features, including favorites in the privacy community like Firefox and Brave. As of right now, Chrome on Android has not implemented Gemini yet, so you don't have to change your habits there. When the time comes, though, Vanadium by the GrapheneOS team will most likely be the recommended browser. On desktop, the most secure option without AI is going to be Trivalent from the Secureblue operating system. The best alternative to Trivalent on non-Fedora systems will most likely be GNOME Web, for reasons that will be explained in a future article. Web uses the WebKit rendering engine, inheriting many of the security features used in browsers like Safari. It is one of the more secure options on Linux, and the only browser I can find that has proper sandboxing while running in the Flatpak sandbox. This makes Web an easily accessible yet secure option for people who can't use Trivalent.
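If you want to give Web a try, it ships on Flathub. A minimal sketch of getting it running, assuming Flatpak and the Flathub remote are already set up on your system:

    # Install GNOME Web (package ID org.gnome.Epiphany) from Flathub, then launch it
    flatpak install flathub org.gnome.Epiphany
    flatpak run org.gnome.Epiphany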

Another thing people need to rethink is how much they trust AI to run on their system. Thanks to Sploitlight, the possibility of AI breaking your security model is very real. I would consider it inadvisable to use Apple systems due to their deep AI integration; instead, use GrapheneOS coupled with one of these desktop operating systems: Qubes OS, Secureblue, Kicksecure, or Tails. These systems do not come with deep AI integration, instead relying on a third-party app to provide less-privileged AI access.

On Android, I recommend Google's AI Edge Gallery with one of the Gemma models, with the Network permission revoked after the model is downloaded. On desktop, I initially wanted to recommend Alpaca by Jeffser, because it runs in a Flatpak sandbox and is generally user friendly. However, in my experience Alpaca was very unstable. At the moment, I recommend llamafile by Mozilla. I know, I'm praising Mozilla for once; do pigs fly yet? Jokes aside, llamafile genuinely surprised me because it has decent sandboxing features while remaining versatile. Llamafile uses seccomp to sandbox the program on Linux, and pledge on OpenBSD. I absolutely love that llamafile uses built-in, native sandboxing features, but the best part is that it does so while remaining as portable as an AppImage. All you have to do is run the llamafile in a terminal, and you have a CLI-based chat window to talk to your preferred model, as sketched below. If that is too daunting, passing the --server-v2 flag allows llamafile to host a GUI that can be opened in any web browser. Llamafile can also parse files to add context to your prompt, so it is a very capable assistant.

While llamafile is great, there is one gap in its security. Llamafile's sandboxing prevents it from being taken over by an attacker and leaking information, but it doesn't protect your system from a maliciously crafted llamafile. Make sure you're downloading a legitimate llamafile from Mozilla before running it.
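For the curious, here is a minimal sketch of that workflow. The filename model.llamafile is a placeholder (the real name depends on which model you grab from Mozilla's llamafile releases), and exact flags can vary between llamafile versions:

    # Mark the downloaded llamafile as executable, then run it.
    # 'model.llamafile' is a hypothetical filename; use the one you downloaded.
    chmod +x ./model.llamafile
    ./model.llamafile               # CLI-based chat session in the terminal
    ./model.llamafile --server-v2   # per the above, hosts a GUI you open in a browser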
