What Do Porn, AI, and the Job Market Have in Common? Too Much
As a dyslexic writer, I use AI to make my writing sound human. That might sound ironic, but it’s true. The promise of this technology isn’t what most people think it is, and in many ways it’s more dangerous than the hype suggests. It’s a useful tool, but it has real potential for misuse.
In this post, I want to talk about how I use AI, where it helps, where it fails, and what that means for the world we’re building.
Using AI as a Dyslexic Writer
I use AI mostly for grammar and spellchecking. As someone with dyslexia, writing can get messy. Grammar often feels like a maze of arbitrary rules. Spell checkers miss things or give poor suggestions. Other tools are either too rigid or overwhelming.
I still do what I can as I type, but I no longer spend hours battling through it alone. Now I drop text into AI and let it act as a second pair of eyes. Sometimes my prompt is just: “Grammar check the following text.” I’ve considered switching to “proofread,” but most days I just want to get the job done.
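If you’re curious what that looks like when I script it, here’s a minimal sketch using the OpenAI Python client. Treat the model name and the prompt wording as placeholders for whatever you actually use.

```python
# A minimal sketch of my grammar-check habit as a script.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the
# environment; the model name is just an example.
from openai import OpenAI

client = OpenAI()

def grammar_check(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you trust
        messages=[
            {"role": "user", "content": f"Grammar check the following text:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

print(grammar_check("Me and him was going too the shops yesterday."))
```

Whatever comes back still gets a manual pass before publishing, for reasons I’ll get to below.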
AI doesn’t replace me—it’s not a ghostwriter. But I’d be lying if I said it always sounds like me. That’s the trade-off. The grammar might be perfect, but something often gets lost. I still need to tweak it before publishing, because the final voice needs to be mine.
Writing Is a Process, Not a Straight Line
Most of my posts start with one idea. Then I scrap it, jump to another, and rework everything halfway through. It’s chaotic. Sometimes I don’t even know where I’m going until I get there. Plenty of drafts never make it to the end. That’s part of the process.
I’ve even got two unfinished short stories. One’s around 10–12k words, a rough first draft that badly needs editing. Without AI, that kind of work would take me months. AI beats the old tools I used, but the draft still needs a human to shape it properly.
AI Misses the Point—Literally
I’ve tried using AI to write stories, but they often spiral out of control. The model, the prompt, the purpose—all of it matters. General-purpose models aren’t good at specialist tasks. You can’t turn a screwdriver into a hammer and expect it to work. Prompting is a skill of its own, and I’m still figuring it out.
AI grammar is usually too perfect. Compared to my earlier writing, the difference is obvious. The phrasing can sound unnatural and robotic. Still, it does a solid job of fixing spelling. I’ve written absolute rubbish and it has somehow still guessed the word I meant. That said, it misses nuance.
One time I had a typo in the word “nuance.” The AI rewrote the sentence to highlight the error—but in doing so, it lost the original point. It changed the meaning entirely. I only caught it when I read it back. I rewrote that section five times.
AI Doesn’t Handle Chaos Very Well
I tend to jump from one idea to another, then loop back again. I’m not great at transitions. AI helps keep things readable—but only to a point. It doesn’t always understand what I’m trying to say. Depending on how I prompt it, the flow can feel forced.
For example, I asked Copilot to extract ingredients from a photo of my shopping list. The thread went on so long that it started inventing items and suggesting recipes instead. I caught the shift, but someone else might not have, and in a fully automated pipeline nobody would. That kind of blind automation already exists in other areas too, especially in business: no oversight until it’s too late.
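The guard I wish these pipelines had is almost embarrassingly simple: the model proposes, a human confirms before anything acts on it. A rough sketch, with extract_items as a made-up stand-in for the actual model call:

```python
# Human-in-the-loop gate: nothing downstream runs until a person
# has eyeballed what the model extracted.

def extract_items(image_path: str) -> list[str]:
    # Stand-in for the real vision-model call. Imagine it sometimes
    # drifts and invents items, as Copilot did with my shopping list.
    return ["milk", "eggs", "flour", "surprise recipe idea"]

def confirmed(items: list[str]) -> bool:
    print("The model extracted:")
    for item in items:
        print(f"  - {item}")
    return input("Does this match the photo? [y/n] ").strip().lower() == "y"

items = extract_items("shopping_list.jpg")
if confirmed(items):
    print("OK, carrying on with:", items)
else:
    print("Model drifted; back to the photo.")
```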
Copilot isn’t ready for prime time. That moment made me think twice about trusting it. I’ve trained myself to question AI outputs—but too many people don’t. Critical thinking is lacking. Even people with degrees fall into that trap. And that worries me.
The Job Market Is Already Broken
We’ve been automating hiring for years—and doing it badly. I get the pressure to process hundreds of applications cheaply. Most businesses are small. I don’t blame them for outsourcing recruitment. But I also don’t buy the hype around productivity gains.
Honestly, I had to rewrite this section because the AI didn’t understand what I meant the first time.
Often, the people doing the filtering don’t understand what the job actually requires. Automated systems end up mismatching people and roles. Now we’re flooded with AI-generated CVs and cover letters—“AI slop” from desperate applicants just trying to get noticed.
It’s another filter problem. Some qualified people are overlooked because their CV doesn’t fit the algorithm. Same thing happens with dating apps: you might filter out someone great just because they didn’t check the right boxes.
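To make the filter problem concrete, here’s a toy version of the keyword screen a lot of applicant tracking systems still amount to. The keywords and CV lines are invented, but the failure mode is real:

```python
# A blunt keyword screen: describe your experience in the wrong words
# and you're out, no matter how qualified you are.
REQUIRED_KEYWORDS = {"kubernetes", "microservices", "agile"}

def passes_screen(cv_text: str) -> bool:
    return REQUIRED_KEYWORDS.issubset(cv_text.lower().split())

qualified = "ten years running containerised services at scale with k8s"
buzzwordy = "agile kubernetes microservices synergy delivered at pace"

print(passes_screen(qualified))  # False: right skills, wrong words
print(passes_screen(buzzwordy))  # True: right words, skills unknown
```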
If it takes hundreds of applications to get a single interview, something’s broken.
Bias, Flawed Data, and Stolen Work
AI is only as good as the data it’s trained on—and that data is often flawed or outright stolen. Models trained on each other reinforce bad habits. That creates a cycle of baked-in errors.
Take image editors that lighten Black skin because they were trained mostly on white faces. Or the elephant in the room: copyright. Many companies either don’t know what data they used—or worse, they know and don’t care.
You can’t train on copyrighted material and then claim your output is protected. That “trust me bro” attitude while profiting off someone else’s work really rubs me the wrong way. We’ve become the product in more ways than one.
Do One Thing Well—Not Everything Badly
Right now, companies are chasing general-purpose models that aren’t very good at anything in particular. A Chinese startup called DeepSeek took a smarter approach: several specialist models, with a general model directing the task. That made sense—though there’s been no update since, so who knows?
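For what it’s worth, my mental model of that design looks something like the sketch below. To be clear, this is my own simplification of the routing idea, not DeepSeek’s actual architecture:

```python
# Router-plus-specialists: a cheap classifier picks the model,
# a specialist answers. The lambdas stand in for real model calls.

SPECIALISTS = {
    "code": lambda prompt: f"[code model] {prompt}",
    "maths": lambda prompt: f"[maths model] {prompt}",
    "prose": lambda prompt: f"[prose model] {prompt}",
}

def route(prompt: str) -> str:
    # In a real system a small general model would do this step.
    if "def " in prompt or "bug" in prompt:
        return "code"
    if any(ch.isdigit() for ch in prompt):
        return "maths"
    return "prose"

def answer(prompt: str) -> str:
    return SPECIALISTS[route(prompt)](prompt)

print(answer("Fix the bug in my parser"))  # routed to the code model
print(answer("What is 17 * 23?"))          # routed to the maths model
```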
Meanwhile, Western companies are burning billions chasing marginal gains. Huge energy and compute costs, for what? Eventually, the money will run out, and efficiency will matter. But for now, it’s endless scaling with no direction.
As always, the real breakthroughs don’t make headlines. The flashy demos get the clicks. The rest quietly shapes the future.
Machine Learning Got a Makeover
We used to call it machine learning. Now it’s “AI” because branding helps reel in investors. Google still relies on machine learning for search and YouTube—but even those feel like they’ve lost their edge.
Part of that is profit-seeking. Companies want to extract value at any cost. But now they’re dealing with a tidal wave of AI-generated content—slop that might end up breaking their own platforms.
Some models are genuinely impressive. Most are built on broken foundations.
Regulation Is Always Playing Catch-Up
Regulation moves too slowly. By the time rules are in place, the tech has already changed. And when regulation does land, it brings trade-offs: cost, delays, sometimes stifling innovation. Sometimes that’s the point. Other times, it’s just poor design.
The worst outcome is regulatory capture—when the people writing the rules used to work for the companies they’re supposed to regulate. That locks out competition and builds in bias.
Maybe it’s time we took some power back from big tech. Strict rules don’t always deliver—but without them, we risk giving all control to a handful of giants.
We’ve Done This Dance Before: Online Safety, Porn, and Policy
The UK’s Online Safety Act kicks in on July 24th. It introduces age verification for social media and porn. We’ve seen this before. What happened last time? Users just moved to platforms that didn’t comply.
Every bit of friction makes people less likely to engage. Shady sites flourish. And while real harms—like deepfake porn—do need tackling, these checks won’t work. They’re expensive and hard to enforce.
My solution? Proper sex education. No opt-outs. Teach relationships, consent, porn—openly, in schools. It might make some people uncomfortable, but it’s necessary.
We should also build a healthier, more ethical UK porn industry. Legalise sex work. Fund moderation. Talk about it honestly instead of pretending it doesn’t exist.
If we must have age checks, put the burden on platforms like Google and Apple—not small service providers. Maybe a one-time verification system would be better. I don’t know. But I do know the current plan is flawed, and AI-generated content is only making things more complicated.
Final Thought
AI helps me write. It helps me communicate in ways that used to take me hours. But it’s far from perfect. It misses the point, rewrites my meaning, and strips away my voice if I’m not careful.
It’s a tool—and tools need oversight.
That’s true for me, and it should be true for the systems shaping our future.
That’s my political trauma dump for the day. I’d rather focus on the real issues than get lost in the noise and performance.