What 1,571 Connections Revealed About Our Story: The Interlinking Experiment
Published on: January 10, 2026
We asked a simple question: What happens when you stop treating blog posts as isolated islands and start mapping the invisible threads between them?
The answer required 21 specialized Claude Flow agents working in parallel across 227 posts, extracting themes, identifying relationships, and generating relevance descriptions for every connection.
The result: 1,571 meaningful relationships with an average of 6.9 connections per post.
But the numbers only tell part of the story.
The real discovery wasn't the quantity of connections - it was the patterns that emerged when we looked at the whole graph.
The Concept Clusters: Certain ideas act as gravitational centers, pulling other posts into their orbit. Trust Debt connects to 89 posts - more than any other concept. FIM Framework links 73 posts through technical architecture discussions. Drift appears in 67 posts, revealing it as a unifying concern across domains. Unity Principle (S=P=H) threads through 52 posts, often in unexpected contexts.
These aren't arbitrary connections. Each represents a moment where we returned to foundational concepts, sometimes consciously, often without realizing we were building on previous work.
The Hidden Series: The agents detected series - some explicit, others we didn't know we had written. The Drift Chronicles (Parts 1-4) was an explicit series. The Trust Debt Saga turned out to be 12 posts telling a single story across months. The AWS Rejection Arc comprised 5 posts processing and building on a pivotal rejection. The Un-Robocall Evolution tracked a methodology as it matured across 8 posts.
Some of these "series" were intentional. Others emerged organically from revisiting ideas. The agents surfaced connections we had felt but never articulated.
The Location Anchors: Certain places appeared repeatedly, grounding abstract concepts in physical reality. Austin, TX appeared in 23 posts (our base of operations). Capital Factory featured in 8 posts (where key incidents occurred). United Nations appeared in 6 posts (the Un-Robocall origin story).
These location anchors create a sense of physical continuity. The ideas have addresses.
The People Threads: Named individuals create narrative continuity. Elias Moosman appears in 31 posts (the author's journey). Benito Fernandez features in 4 posts (the first Un-Robocall session). Aaron Handwerker appears in 3 posts (UN facilitation). Max Tegmark threads through 5 posts (consciousness and physics discussions).
Real people make abstract ideas feel lived-in.
The hardest part wasn't finding connections - it was explaining WHY they matter.
We built a relevance generation system that produces paragraphs like: "If the drift concept resonated with your AI experiences, 'The Speed of Trust' tells the full story. This post shows what happens when drift goes unchecked in organizational change."
Each relevance paragraph attempts to answer: "If you liked X about this post, why should you read Y next?"
This required comparing post pairs - not just matching keywords, but understanding narrative flow. The agents used concept-specific templates combined with contextual analysis. Series connections say "Continue the story..." Concept connections say "If [concept] resonated, [post] goes deeper..." Event connections say "The [event] story continues in..." Location connections say "More from [location]..."
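The template-per-connection-type idea can be sketched in a few lines. This is a minimal illustration, not the production prompts: the template strings paraphrase the patterns quoted above, and the `relevance_line` helper and its field names are assumptions.

```python
# Illustrative templates keyed by connection type; the real system combines
# these with contextual analysis of the post pair, which this sketch omits.
TEMPLATES = {
    "series":   "Continue the story: '{target}' is the next part of {series}.",
    "concept":  "If {concept} resonated, '{target}' goes deeper into it.",
    "event":    "The {event} story continues in '{target}'.",
    "location": "More from {location}: '{target}'.",
}

def relevance_line(connection: dict) -> str:
    """Pick the template matching the connection type and fill in its fields."""
    template = TEMPLATES[connection["type"]]
    return template.format(**connection["fields"], target=connection["target"])

line = relevance_line({
    "type": "concept",
    "target": "The Speed of Trust",
    "fields": {"concept": "drift"},
})
# Produces a concept-style recommendation naming the target post.
```

The point of the structure is that the connection type, not a generic similarity score, decides how the recommendation is phrased.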
Generic recommendations fail. "You might also like..." means nothing. Specific relevance - "If you were frustrated by X, this post shows the solution" - creates genuine value.
Isolated Posts Are Dead Posts. Posts without connections don't get discovered. If a chronological list is the only navigation, the blog becomes a graveyard for 100+ posts. High-affordance design means: Never expect users to click "back to blog" and browse. Give them forward paths at every decision point.
Themes Beat Tags. Tags are mechanical. Themes are meaningful. "AI-alignment" as a tag tells you nothing. "This post applies AI alignment principles to the $100M deal question" tells you everything. The relevance paragraphs convert tags into context.
The Book Chapters Are Underlinked. Our 6 book chapters (Tesseract Physics series) should be the hub of the knowledge graph. Instead, they're isolated peaks. Action item: Every concept-heavy post should link to the relevant chapter. The book is the canonical source; the blog posts are explorations and applications.
Story Posts Outperform Analysis Posts. Posts with narrative anchors (events, people, locations) generated more meaningful connections than pure analysis posts. Why? Because stories create memory hooks. "The AWS rejection" is a node that everything can hang from. "An analysis of trust dynamics" floats free.
For those curious about the machinery, here's the breakdown. We deployed 21 Claude Flow agents in total. 10 relevance writers handled content-comparison and relevance-generation. 5 context analyzers performed theme-extraction and narrative-analysis. 3 JSON updaters handled data-merge operations. 2 chapter linkers managed book-chapters and cross-reference tasks. 1 swarm coordinator handled overall orchestration.
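The roster above can be written down as data, which also makes the headcount easy to sanity-check. The role names and task labels follow the breakdown in the post; the dictionary layout itself is an assumption.

```python
# Swarm roster as described in the post: role -> agent count and task labels.
AGENT_ROSTER = {
    "relevance-writer":  {"count": 10, "tasks": ["content-comparison", "relevance-generation"]},
    "context-analyzer":  {"count": 5,  "tasks": ["theme-extraction", "narrative-analysis"]},
    "json-updater":      {"count": 3,  "tasks": ["data-merge"]},
    "chapter-linker":    {"count": 2,  "tasks": ["book-chapters", "cross-reference"]},
    "swarm-coordinator": {"count": 1,  "tasks": ["orchestration"]},
}

# The role counts should sum to the 21 agents deployed.
total_agents = sum(role["count"] for role in AGENT_ROSTER.values())
```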
The Processing: 227 posts analyzed, 1,571 relationships generated, 6.9 average connections per post, approximately 3,000 characters of relevance text per post.
The relationship extraction script detects 25+ concept patterns, 7 locations, 6 major events, and 6 key people. Each relationship is weighted by type. Series connections receive weight 100 (highest priority). Events receive weight 25. Concepts receive weight 20. Tags receive weight 5.
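The weights quoted above imply a simple ranking rule: when a post has more connections than it can display, series links surface first, then events, then concepts, then tags. A minimal sketch, assuming each relationship is a record with a `type` field (the helper function is illustrative, not the actual script):

```python
# Relationship weights as stated in the post.
WEIGHTS = {"series": 100, "event": 25, "concept": 20, "tag": 5}

def rank_relationships(relationships):
    """Sort a post's relationships so the highest-priority types surface first."""
    return sorted(relationships, key=lambda r: WEIGHTS.get(r["type"], 0), reverse=True)

ranked = rank_relationships([
    {"type": "tag",     "target": "ai-alignment"},
    {"type": "series",  "target": "Drift Chronicles, Part 2"},
    {"type": "concept", "target": "Trust Debt"},
])
# The series connection outranks the concept, which outranks the tag.
```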
Every blog post now includes four new navigation elements. The Themes section shows the concepts this post explores. Series navigation provides context for multi-part stories. Related ideas offers thematically connected posts with relevance explanations. Story jumps enable location, event, and person-based connections.
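One plausible shape for the per-post payload behind those four elements is sketched below. The four top-level keys mirror the elements listed above; every field name and sample value is a hypothetical illustration, not the actual schema.

```python
# Hypothetical navigation payload attached to a single post.
post_nav = {
    "themes": ["Trust Debt", "Drift"],
    "series": {"name": "The Trust Debt Saga", "part": 3, "of": 12},
    "related": [
        {"target": "The Speed of Trust",
         "relevance": "If the drift concept resonated, this tells the full story."},
    ],
    "story_jumps": {
        "locations": ["Austin, TX"],
        "events": ["AWS rejection"],
        "people": ["Elias Moosman"],
    },
}

# One entry per navigation element: themes, series, related, story jumps.
nav_elements = list(post_nav.keys())
```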
Users no longer need to "go back to the blog" and browse. They can stay in the story, jumping between connected ideas, following narrative threads wherever they lead.
This is what high-affordance navigation looks like: Not a list to browse, but a graph to explore. Every post is a node. Every connection has a reason. The reader chooses their path.
We used AI to map the hidden structure of AI-generated content about AI alignment.
The recursion isn't accidental.
The same principles that make blog interlinking work - semantic proximity, contextual relevance, narrative continuity - are the principles FIM uses to make AI trustworthy.
Position becomes meaning. Connection becomes understanding. The map IS the territory.
That's not a metaphor. That's the Unity Principle in action.
The Graph Reveals the Story
227 posts. 1,571 connections. 21 agents. One story. The interlinking experiment proved what the framework predicted: meaning lives in structure, not in isolation. When you map the invisible threads, the architecture speaks. Every connection is a path. Every path is a choice. Now go explore it.
Related Reading
- We Killed Codd, Not God - Why database normalization accidentally broke meaning-position binding
- The Most Interesting Thing I've Read in a Decade - What happens when pattern recognition meets validation
- The k_E Derivation: Five Independent Proofs - The mathematics behind drift measurement
- Metavector Walks and ShortRank - How semantic addressing makes navigation possible
Book Chapters
- Chapter 1: The Unity Principle - S=P=H and why structure is meaning
- Chapter 2: Universal Pattern Convergence - Why the same patterns emerge across domains
- FIM Patent (Appendix) - The 12x12 grid architecture
More on Trust and Grounding
- The Equation That Changes Everything: Trust Debt Revealed - The physics behind why connections matter
- The Speed of Trust - How geometry creates meaning in information systems
- Who Owns the Errors? - The sovereignty question at the heart of AI-assisted analysis