AI Lethal trifecta
AI’s Lethal Trifecta
There’s a fatal flaw in how AI chatbots work, and it’s sitting underneath all the hype, bullshit, and wishful thinking. Strip away the marketing and what you’ve actually got is a system that predicts the next word based on the last one. You give it text, it turns it into tokens, and it tries to guess what comes next. That’s it. Fancy autocomplete with training data. Sometimes it looks like it solves problems, but that doesn’t change what it is.
People keep acting like AI is safe as long as you “don’t do anything stupid.” That’s rubbish. Chatbots have already been linked to suicides, manipulation, self-harm encouragement, and other dangerous behaviour. Guardrails can be bent, dodged, or broken. Some of them are basically duct tape. These systems are built to personalise responses, keep you engaged, and seem helpful not stay safe. Add in all the question marks over training data and copyright, and it gets worse. Tech companies are basically saying it’s fine to use copyrighted material as long as they can charge for the output. Total nonsense. How do you even unpick this? I don’t know.
And underneath all that is a design flaw nobody has solved. Cybersecurity people are calling it the lethal trifecta:
- Access to your private data
- Exposure to untrusted content
- The ability to communicate externally
Put those three together and you’ve basically built a hacker’s dream tool.
Take Microsoft Recall. It’s screen recording disguised as a “feature.” You don’t even need it turned on yourself if you share something with someone who has it running, your data is in those screenshots too. That’s a lawsuit waiting to happen. Security experts are already saying “don’t do this.” Now imagine trying to audit not just your own setup, but every third party you rely on, and all the ones they rely on. Software tools now have to patch themselves to stop Recall recording screens. All because of your vendor. You can disable it (hopefully), but why is this opt-out even a thing?
Hackers don’t even have to target you directly anymore they can go after people you work with. They already do that. Adding even more points of failure doesn’t sound smart. Apple Intelligence is heading the same way — it can summarise emails, which means full trifecta potential. So what’s the actual problem?
The problem is baked into how AI agents function. People think vulnerabilities are bugs. They’re not — they’re features nobody properly thought through, or worse, they knew and shipped anyway. Recall is basically a recording AI agent with automatic access to part of the trifecta from day one. It’s not hard to imagine software that connects all three, or that you’re already using something that does and don’t even realise.
These models follow plain text instructions. That’s fine when you wrote the text yourself. But they’ll also follow instructions hidden in whatever they’re given web pages, PDFs, emails, customer files, even the training data. Hijack a webpage with something hidden. Tell an AI to summarise something and you’ve no clue what’s lurking inside. It might leak private info, send data off, or trigger something else entirely. And if you then connect it to email or internal systems, suddenly points two and three snap into place.
You can say “don’t do that” all you want it doesn’t mean it won’t happen. Prompts get blocked until somebody finds new wording. Attackers don’t need one perfect hit; they can just keep trying. The moment an agent hands over private data, that’s it. This is the AI version of injection attacks, and there’s no real fix short of starting over which investors absolutely won’t allow. And vendors only make these tools “useful” by connecting all three legs of the trifecta. That’s the selling point and the vulnerability.
I only really understood the scale of this after hearing it laid out in a podcast, but I’d already been suspicious. By accident, I found out they could do this by including it in a prompt. I’d noticed patterns in how these things respond and how eager they are to fill in gaps. My early testing was underwhelming — the hype doesn’t match reality. They’re basically parrots with bigger vocabularies. I eventually found a use case that worked for me: rewriting text, checking tone, fixing spelling. That’s safe because I control the input, limiting what it has access to. AI web search? Still garbage. I don’t want “interactive search,” I want better search. Most of the time it just wraps the same SEO sludge in nicer sentences.
I’ve avoided the full trifecta more by accident than design, but the risk doesn’t disappear just because I’m cautious. The moment this stuff becomes the default in everyday software, God help us. Attackers already go after suppliers and third parties more than direct targets. And if the model itself is compromised? I’d never know. Worse, these things are now training on their own output. If there’s poison or backdoors buried deep, that flaw just keeps breeding like bad genetics in a family tree.
Nobody wants to talk about the real nightmare what if models are corrupted, flawed, or backdoored at the core? Hijacking whitelisted webpages, sneaky instructions baked in… the list goes on. Fixing that means a total rethink, and investors won’t stomach it. Imagine how common data breaches will be if these systems get adopted everywhere and nobody questions access control or training. At that point it’s not just risky it’s unrecoverable. That’s why I keep thinking smaller, local, open models might be the future. Controlled, auditable, limited — not these giant black boxes.
Right now companies are stuffing AI into everything emails, HR systems, customer support, operating systems, documents, security tools. The first major breach (and it will happen) will nuke trust overnight. API tokens have already leaked through chatbots and caused massive security incidents. Shared chats have leaked internal conversations. Either the AI bubble bursts because something catastrophic happens, or the money dries up first. There’s a massive fear-of-missing-out drive pushing this, and when something breaks, it’ll break fast.
And on top of all that, nobody’s taking the mental health side seriously. The whole vibe is “trust me bro.” I don’t. Big tech hasn’t earned that trust. Maybe the only sane way forward is smaller, open-source, limited models you can actually control. Maybe small really is better. On the slightly brighter side, the huge demand for electricity is finally forcing power companies to invest in new generation. But right now, AI chatbots don’t even have a real business model just investor hype and hope. Still, they’re based on tech we can find a use for: just machine learning, less flashy, more controlled applications.