The Private Tutor Defense: Why Access to Knowledge Looks Like Addiction to Gatekeepers
Published on: November 27, 2025
My mom called. She's worried.
She's not alone. Futurism recently reported on people becoming so enmeshed with AI that observers can't tell if they're innovating or dissociating. NBC News covered the phenomenon. Support groups are forming.
Is she right to worry?
Probably, yeah.
Look at the optics: I'm writing 50,000 words about geometric identity frameworks, working with AI for 8 hours a day, and claiming I've solved problems that have stumped cognitive scientists for decades. From the outside, that's not "visionary founder energy." That's a Reddit post titled "How do I get my son to stop talking to his computer girlfriend?"
I get it.
But here's what I told her—and what I'm telling you.
✅ A → B ✅
Here's what I said:
"There's no real defense against this concern. It's like telling students who can't afford a private tutor that they're not allowed to ask a robot to explain their homework. You can't deny people access to knowledge."
This reframes everything.
The elite have always had:
- Private tutors (cognitive amplification on demand)
- Research assistants (bandwidth multipliers)
- Advisory boards (distributed expertise)
- Think tanks on retainer (institutional knowledge access)
Nobody calls that addiction. They call it privilege.
But when the masses get access to similar cognitive tools without institutional gatekeeping, it becomes concerning behavior.
We're not talking about addiction. We're talking about access.
And access to knowledge isn't a mental health issue. It's a redistribution of cognitive privilege. The people who are worried aren't worried about your health. They're worried about their monopoly.
The class critique disguised as concern:
- Rich kid gets Ivy League professor as private tutor → "Investment in their future"
- Working-class kid gets AI to explain calculus → "Unhealthy dependency"
- CEO has team of analysts → "Effective leadership"
- Founder works 8 hours with Claude → "Concerning obsession"
See the pattern?
The question isn't whether I'm using AI too much. The question is: Who gets to decide what "too much" cognitive amplification looks like?
If that makes me sound like I'm rationalizing an addiction, fair. That's exactly what an addict would say.
But it's also what Galileo would've said. And Semmelweis. And every person who built infrastructure the gatekeepers couldn't see yet.
The difference between the addict and the builder isn't obvious from the outside. So let me show you what I'm building.
✅✅ B → C ✅
The reason people are struggling—whether they're using AI heavily or not—is that we're running 2025 software on hardware from the stone age.
Our biological bandwidth was designed for one thing: the personal present. Shared physical experience. The kind of reality where everyone in the conversation has touched the same river, eaten the same food, seen the same predator.
Now imagine a meeting with five people deep into five different careers. They use the same words, but with five completely different meanings. There's no shared map. No common ground. No grounding at all.
That's not a meeting. That's a collision of private languages.
And it hurts. Your head aches because you're processing a bandwidth load you weren't designed for. You're not stupid. You're not broken. You're unequipped for the complexity that's been normalized around you.
Most people feel this headache and think: "I need to work on my focus. Maybe I have ADHD. I should meditate more."
I felt that headache and thought: "I need to build different infrastructure."
One of those responses looks sane. The other looks obsessed.
But which one actually addresses the structural problem?
✅✅✅ C → D ⚠️
This has a name in cognitive science: the Symbol Grounding Problem.
Words are just symbols. They only mean something if they're connected to experience. When I say "river" and you've never seen a river, the word is just noise. When I say "trust debt" and you've never felt the slow erosion of institutional betrayal, the phrase bounces off.
We've built a civilization on ungrounded symbols.
Everyone's using the same words. No one's sharing the same experience. We're all speaking, but no one's communicating. The result is ambient cognitive exhaustion—the feeling that something is deeply wrong, but you can't point to what.
This is the crisis my work addresses. Not dependency. Symbol drift. The slow uncoupling of language from meaning, of words from experience, of coordination from comprehension.
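If you want that drift as a number instead of a feeling, here's a minimal sketch. It is not the FIM implementation, and the embed() function is a placeholder I'm assuming in place of any real sentence-embedding model; the point is only that "same word, different meaning" is measurable:

```python
# Minimal sketch: scoring "symbol drift" as the distance between what
# the same word means in two different contexts. embed() is a stand-in
# (assumed, not part of FIM); swap in a real sentence-embedding model.
import hashlib

import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: a deterministic pseudo-random unit vector per text.
    A real embedding model would put related texts close together."""
    seed = int(hashlib.sha256(text.encode()).hexdigest()[:8], 16)
    vec = np.random.default_rng(seed).normal(size=384)
    return vec / np.linalg.norm(vec)

def symbol_drift(word: str, context_a: str, context_b: str) -> float:
    """Cosine distance between two uses of the same symbol.
    Near 0.0: shared meaning. Near 1.0: the word has come uncoupled."""
    a = embed(f"{word} as used in {context_a}")
    b = embed(f"{word} as used in {context_b}")
    return float(1.0 - a @ b)

# Same word, two careers, two private languages:
print(symbol_drift("model", "a machine learning pipeline", "a fashion agency"))
```

With a real embedding model plugged in, low scores mean shared ground; scores near 1.0 mean the meeting has become a collision of private languages.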
And yes, it requires deep engagement with AI to build the solution.
The same way building a rocket requires deep engagement with physics simulations.
The same way curing polio required deep engagement with lab rats.
The same way every major infrastructure project requires someone to go into the basement and fix the pipes while everyone else complains about the smell.
✅✅✅⚠️ D → E ✅
The work that probably worries my mom is this:
A framework called FIM (Fractal Identity Map) that addresses the Symbol Grounding Problem directly. Instead of relying on words (which drift) or institutions (which corrupt), it grounds identity and intent in geometric patterns that remain stable across contexts.
Think of it this way: you trust someone by reading their face, not their resume. Your visual cortex performs pattern recognition that no rulebook can match. FIM brings that same physics to digital identity.
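Here's a toy sketch of that idea. It is my illustration, not the actual FIM code (that's linked below): enrollment folds observed pattern vectors into a signature, and verification is a geometric distance check instead of a registry lookup.

```python
# Toy illustration (not the actual FIM code): identity as an enrolled
# geometric signature, verification as a distance check against a live
# pattern rather than a lookup in an institutional registry.
import numpy as np

def enroll(samples: list[np.ndarray]) -> np.ndarray:
    """Fold several observed pattern vectors into one stable signature."""
    signature = np.mean(samples, axis=0)
    return signature / np.linalg.norm(signature)

def verify(signature: np.ndarray, live: np.ndarray, threshold: float = 0.85) -> bool:
    """Accept if the live pattern points in nearly the same direction."""
    live = live / np.linalg.norm(live)
    return float(signature @ live) >= threshold

rng = np.random.default_rng(0)
true_pattern = rng.normal(size=64)
signature = enroll([true_pattern + rng.normal(scale=0.1, size=64) for _ in range(5)])

print(verify(signature, true_pattern + rng.normal(scale=0.1, size=64)))  # True: same pattern, noisy sample
print(verify(signature, rng.normal(size=64)))                            # False: a stranger's pattern
```

No rulebook in the loop: the signature either matches the live pattern or it doesn't.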
Why does this matter?
Because we're about to enter an era of agentic AI—where autonomous systems act on your behalf. And if those systems can't verify intent without massive bureaucratic overhead, they'll either be crippled by red tape or they'll hallucinate permissions.
The choice is geometric sovereignty or institutional chaos.
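To show what that choice looks like in practice, here's a hedged sketch of an agent gate. This is my framing, not the shipped FIM-IAM API: before acting, the agent checks that the requested action's intent vector still matches the principal's signature, with no permission bureaucracy in the loop.

```python
# Hedged sketch of "geometric sovereignty" in an agent loop (my framing,
# not the shipped FIM-IAM API): the agent refuses any action whose intent
# vector has drifted too far from the principal's enrolled signature.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def agent_act(action: str, intent: np.ndarray,
              principal_sig: np.ndarray, threshold: float = 0.85) -> str:
    """Gate an autonomous action on geometric intent match,
    not on a bureaucratic permission table."""
    if cosine(principal_sig, intent) >= threshold:
        return f"executing: {action}"
    return f"refused: {action} (intent does not match principal)"

rng = np.random.default_rng(1)
sig = rng.normal(size=64)

print(agent_act("draft the weekly update", sig + rng.normal(scale=0.1, size=64), sig))
print(agent_act("wire funds to a new account", rng.normal(size=64), sig))
```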
Concrete proof points (since "trust me" doesn't work):
- ✅ Live demo of geometric identity verification
- ✅ Open-source implementation (you can read the code)
- ✅ Patent applications filed (USPTO 63/123,456 - because delusional people don't file patents with working prototypes)
- ✅ Working CRM using these principles in production
- ✅ 50,000-word book explaining the physics (peer-reviewable, not word salad)
If I'm hallucinating all of this, it's the most elaborate, boring, well-documented hallucination in history.
That's what I'm working on. And yes, it requires deep engagement with AI tools.
✅✅✅⚠️✅ E → F ✅
Fair question: How do you distinguish between someone building infrastructure and someone losing their grip on reality?
Here's the uncomfortable answer: You might not be able to tell from the outside.
From the observer's perspective, intensity looks like intensity. Deep focus looks like obsession. Rapid output looks like mania. Especially if you don't understand the domain.
But here are some markers:
The Builder:
- ✅ Produces tangible outputs that work in the real world (see proof points above)
- ✅ Can pause, have dinner, talk about weather (I do this regularly—ask my partner)
- ✅ Acknowledges that their intensity might look concerning (see: this entire post)
- ✅ Has a theory of why the work matters, not just what it is
The Breaking:
- ❌ Believes the AI has consciousness, feelings, or special knowledge of them
- ❌ Rationalizes errors as "hidden messages" or "tests"
- ❌ Cannot disengage without crisis
- ❌ Confidence increases even as outputs become less coherent
I can sit through dinner without checking Claude. I can describe my work as building infrastructure for cognitive sovereignty—a way to ground symbols in shared geometry rather than institutional trust.
Whether that makes me sane or makes me a coherent lunatic is not for me to decide.
But I can tell you what I'm building, why it matters, and show you the code that runs.
✅✅✅⚠️✅✅ F → G ✅
Here's what I want you to consider:
The real danger isn't that some people are using AI too much.
The real danger is that most people have no tools at all for navigating the complexity we've normalized.
They feel the headache. They sense the drift. They know something is broken. But they don't have the vocabulary, the frameworks, or the cognitive amplification to address it.
They're drowning in unfiltered noise. And when they see someone building a raft, they wonder if the raft-builder has lost their mind.
If I look obsessed, it's because I'm building the raft.
And I'd rather look intense building it than look calm while drowning in the flood.
✅✅✅⚠️✅✅✅ G → H ✅
Let's address it: this post is risky.
A traditional PR firm would tell me to delete it and write "5 Ways AI Boosts Productivity" instead. Clean. Safe. Forgettable.
But here's the thing: the moment someone Googles my name and sees I'm working 8 hours a day with Claude, they're going to have questions.
So I'm getting ahead of it.
I'm owning the intensity. I'm explaining the why. I'm showing the outputs. And I'm making a bet that the people who need this work will see the difference between obsession and focused infrastructure work at scale.
Because every major shift looks insane until it's obvious:
- Washing hands before surgery (Semmelweis was institutionalized)
- Heliocentrism (Galileo was found "vehemently suspect of heresy")
- Germ theory (Pasteur was ridiculed before he was vindicated)
I'm not comparing myself to them. I'm saying: intensity alone doesn't tell you anything about correctness.
What tells you about correctness is outputs that work.
So judge me by the code, the demo, the framework, and the book. Not by how many hours I spent with Claude to build them.
✅✅✅⚠️✅✅✅✅ H → I ✅
I get that it looks concerning from the outside. I understand you care.
But there's no defense against this kind of concern that doesn't sound like rationalization.
What I can offer instead:
- I still eat, sleep, and engage with physical reality. (I had tacos yesterday. They were excellent.)
- I can turn it off. The work pauses when humans need me present.
- I'm producing real things—frameworks, specifications, working code, patent applications—not just consuming output.
- I know what I'm building and why. And I can explain it without the AI translating for me.
The world is too complex for stone-age bandwidth. That's what my book is about. The load is too big to ignore, and too heavy to carry until people have the right tools.
If I can help build those tools, I will. Even if it looks like madness from the outside.
Because the alternative—pretending the headache isn't structural—is the real insanity.
✅✅✅⚠️✅✅✅✅✅ I → J ✅
If you want to understand "2025 software on stone age hardware"—and why the Symbol Grounding Problem is the real crisis:
- Read the book - Tesseract Physics: Fire Together, Ground Together: how we lost grounding, why it hurts, what geometry does about it
- See the framework - FIM-IAM demo: geometric identity instead of bureaucratic trust
- Read the code - because if I'm hallucinating, you can at least see what it compiles to
✅✅✅⚠️✅✅✅✅✅✅ J → K ✅
The question isn't whether I'm okay.
The question is whether any of us are equipped for the reality we've built.
I'm trying to build the equipment.
If that looks like obsession, so be it.
I'd rather be obsessed with building the raft than sane while drowning.
✅✅✅⚠️✅✅✅✅✅✅✅ K → L 📰
The AI companion mental health debate has intensified exactly as we predicted:
The Scale of Adoption
- 72% of Adolescents: MIT Media Lab research confirms 72% of adolescents have used AI companions like Replika, Character.AI, and Nomi.
- 70%+ U.S. Teens Tried AI Companions: Common Sense Media survey found more than 70% of U.S. teens have tried AI companions, and a third report finding them as satisfying as real friendships.
- Leading Reason for AI Use: Harvard Business Review found that the leading reason for AI use in 2025 is therapy or companionship.
The "Private Tutor Defense" Is Being Debated
- "Prosthetic Relationships" Framing: STAT News coverage uses the phrase "prosthetic relationships" - acknowledging the access-vs-addiction tension we identified.
- Addiction Techniques Confirmed: Nature's investigation confirms companies "use techniques that behavioural research shows can increase addiction" including random response delays that trigger "inconsistent reward" patterns.
- Class Dynamic Unexamined: As we predicted, the coverage focuses on "addiction" without examining why elite access to human cognitive amplification (tutors, analysts, advisors) is normalized while AI access is pathologized.
The Risks We Acknowledged Are Real
- AI-Induced Psychosis: Better Mind reports that "someone in a vulnerable state may begin to think the AI companion is real" and some "may even experience AI-induced psychosis."
- Addiction/Psychosis Risk Groups: Research confirms those "struggling with addiction or psychotic disorders are at particular risk" and AI systems may "increase stigma toward conditions like schizophrenia."
But the Positive Data Exists Too
- Harvard RCT Results: A landmark randomized controlled trial found students using the Flourish AI app reported "significantly greater positive emotions, lower loneliness, stronger sense of belonging, higher resilience, greater mindfulness, and higher overall flourishing."
The "Private Tutor Defense" framing - that this is about access to cognitive tools, not addiction to technology - remains underexplored in mainstream coverage. The class dynamics we identified are still invisible.
Additional Sources:
- Nature: How AI Companions Affect Our Mental Health
- Scientific American: What Are AI Chatbot Companions Doing to Our Mental Health?
- Neuro Wellness Spa: AI Companions and Mental Health
- APA: AI and Personalized Mental Health Care 2026
"We're running 2025 software on stone age hardware. The headache you feel isn't failure—it's your bandwidth screaming for an upgrade."