We Asked a Local AI to Review Our Book. Running on a MacBook. No Cloud. No API Costs.

Published on: January 7, 2026

#ollama #local-ai #book-review #tesseract-physics #llama #privacy #self-hosted
https://thetadriven.com/blog/2026-01-07-ollama-reviews-tesseract-physics-local-ai

πŸ¦™ The Experiment: Local AI Book Review

What happens when you ask a 1.3GB language model running entirely on your laptop to review a 7,765-line physics book?

We found out.

The model is small but mighty. And its feedback was surprisingly useful.

πŸ”§ The Exact Method We Used

Step 1: Check Available Models

ollama list

Output:

  • NAME: llama3.2:1b
  • SIZE: 1.3 GB
  • MODIFIED: 4 days ago

Step 2: Extract Book Sections

The full book is 7,765 lines. Too large for the model's context window. We chunked it:

# Extract preface (first 240 lines)
head -240 public/book/tesseract-physics-2025-12-21-as-reviewed.md > /tmp/book-preface.md

# Extract Introduction Section 1 (lines 255-909)
sed -n '255,909p' public/book/tesseract-physics-2025-12-21-as-reviewed.md > /tmp/book-intro1.md

# Extract S=P=H chapter (lines 3000-3500)
sed -n '3000,3500p' public/book/tesseract-physics-2025-12-21-as-reviewed.md > /tmp/book-section3.md
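
If you'd rather chunk the whole book mechanically than pick line ranges by hand, `split` can do it in one pass. This is a sketch of our own, not part of the original workflow; the helper name, 300-line default, and output directory are illustrative:

```shell
# chunk_file: split FILE into fixed-size line chunks under /tmp/book-chunks.
# Helper name and the 300-line default are our own choices.
chunk_file() {
  file=$1
  lines=${2:-300}
  mkdir -p /tmp/book-chunks
  split -l "$lines" "$file" /tmp/book-chunks/chunk-
}

# Example:
# chunk_file public/book/tesseract-physics-2025-12-21-as-reviewed.md 300
```

Each chunk stays well under the small model's context window, and the alphabetic suffixes (`chunk-aa`, `chunk-ab`, ...) preserve reading order.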

Step 3: Send to Ollama with Review Prompts

πŸ“The Exact Prompts We Used

Preface Review Prompt:

cat /tmp/book-preface.md | ollama run llama3.2:1b \
  "You are a book reviewer. Please review this preface from
   Tesseract Physics - Fire Together, Ground Together.
   Evaluate: 1) Hook effectiveness, 2) Clarity of thesis,
   3) Writing quality, 4) Areas for improvement.
   Be specific and constructive."

Introduction Review Prompt:

head -200 /tmp/book-intro1.md | ollama run llama3.2:1b \
  "Review this academic text about database normalization
   and AI alignment. What are the main claims?
   Rate the writing 1-10. List 3 improvements needed."

S=P=H Chapter Prompt:

head -150 /tmp/book-section3.md | ollama run llama3.2:1b \
  "Summarize the main ideas in this text about physics
   and information theory. What is the author arguing?"

Technical Chapter Prompt:

head -200 /tmp/book-section4.md | ollama run llama3.2:1b \
  "Review this technical chapter. Rate 1-10: clarity,
   originality, practical value. What makes this
   compelling or not?"
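
Prompts like the four above can be wrapped in a small helper so each section's review lands in its own file. This is a sketch of ours, not the script we actually ran; `MODEL_CMD` is parameterized so you can swap models or dry-run without editing the function:

```shell
# review_chunk: pipe the first 200 lines of a chunk to the model with a
# review prompt, saving the reply next to the chunk. MODEL_CMD defaults
# to the model used in this post but can be overridden.
# (MODEL_CMD is left unquoted below so it splits into command + args.)
MODEL_CMD=${MODEL_CMD:-"ollama run llama3.2:1b"}

review_chunk() {
  chunk=$1
  prompt=$2
  head -200 "$chunk" | $MODEL_CMD "$prompt" > "${chunk%.md}.review.txt"
}

# Example:
# review_chunk /tmp/book-preface.md "Review this preface. Rate 1-10 and list 3 improvements."
```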

πŸ¦™πŸ”§πŸ“ C β†’ D ⭐

D
Loading...
⭐ The Verdict: Section-by-Section Scores

Preface Scores:

  • Hook Effectiveness: 7/10
  • Clarity of Thesis: 9/10
  • Writing Quality: 8/10
  • Overall: 8/10

Introduction Scores:

  • Writing Style: 8/10
  • Clarity and Concision: 7/10
  • Relevance and Timeliness: 9/10
  • Data and Evidence: 6/10

Technical Chapter Scores:

  • Clarity: 9/10
  • Originality: 8.5/10
  • Practical Value: 9/10
  • Overall: 8/10

πŸ¦™πŸ”§πŸ“β­ D β†’ E πŸ’ͺ

E
Loading...
πŸ’ͺ What Ollama Said We Did Right

Preface Strengths:

  • The metaphor "We Killed God" effectively grabs attention
  • Thesis is clearly stated and easy to follow
  • Engaging and conversational tone with touches of humor
  • "We didn't kill Codd out of malice" works well

Introduction Strengths:

  • Engaging narrative with conversational tone
  • Clear structure with logical flow
  • Emotional resonance - conveys developer frustration effectively
  • Topic is highly relevant and timely

Technical Chapter Strengths:

  • Clear, concise, well-defined concepts
  • Innovative approach combining neuroscience, psychology, and AI
  • Examples and analogies help illustrate complex concepts
  • Valuable perspective on brain function and information retrieval

πŸ¦™πŸ”§πŸ“β­πŸ’ͺ E β†’ F πŸ”¨

F
Loading...
πŸ”¨ What Ollama Said Needs Work

Preface Issues:

  • Hook is somewhat simplistic - could provide more depth
  • Text jumps abruptly between topics
  • Some sentences feel forced or wordy
  • Ends abruptly without summarizing main points

Introduction Issues:

  • Assumes familiarity with technical terms (C3 alignment, referential constraints)
  • Relies heavily on anecdotal examples
  • Numbers like "35M fine" lack supporting context
  • Doesn't adequately discuss alternative approaches

Technical Chapter Issues:

  • Lacks concrete data or research to support claims
  • Relies heavily on theoretical frameworks
  • Some specific measurements need citation
  • Examples may not apply to all contexts

πŸ¦™πŸ”§πŸ“β­πŸ’ͺπŸ”¨ F β†’ G πŸ“‹

G
Loading...
πŸ“‹ Ollama's Improvement Recommendations

High Priority:

  • Add technical definitions for C3 alignment, referential constraints, S=P=H upfront
  • Cite concrete data sources for the 35M fine and 0.3% drift claims
  • Add transitional phrases between sections for smoother flow

Medium Priority:

  • Address common counterarguments ("What about data quality?" "NoSQL alternatives?")
  • Break long sentences into shorter, clearer ones
  • End preface with clear summary and call-to-action

Lower Priority:

  • Balance stories with research citations
  • Tone down emotive language where appropriate
  • Provide more specific measurement methodology

πŸ¦™πŸ”§πŸ“β­πŸ’ͺπŸ”¨πŸ“‹ G β†’ H 🧠

H
Loading...
🧠 The Model Understood the Core Thesis

The most important validation: did a small local model understand what the book is actually about?

It did. A 1.3GB model running locally grasped the central argument:

  • Semantic meaning should match physical storage
  • Position encodes meaning
  • This reduces memory access time
  • Cache alignment follows semantic alignment

πŸ¦™πŸ”§πŸ“β­πŸ’ͺπŸ”¨πŸ“‹πŸ§  H β†’ I πŸ”’

I
Loading...
πŸ”’ Why Local AI Review Matters

Privacy:

  • No book content sent to cloud APIs
  • No risk of training data leakage
  • Complete control over intellectual property

Cost:

  • Zero API costs
  • Unlimited reviews
  • No token counting anxiety

Speed:

  • Each review took 10-30 seconds
  • No rate limiting
  • No API queue waiting

Reproducibility:

  • Same model, same prompts, same results
  • Version controlled
  • Shareable methodology

πŸ¦™πŸ”§πŸ“β­πŸ’ͺπŸ”¨πŸ“‹πŸ§ πŸ”’ I β†’ J πŸš€

J
Loading...
πŸš€ Try It Yourself

Install Ollama:

# macOS
brew install ollama

# Or download from ollama.ai

Pull the Model:

ollama pull llama3.2:1b

Review Your Own Content:

cat your-document.md | ollama run llama3.2:1b \
  "Review this text. Rate 1-10: clarity, originality,
   practical value. List 3 specific improvements."

For Larger Documents:

# Chunk it first
head -200 your-document.md | ollama run llama3.2:1b "Your prompt here"
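
Putting the chunking and review steps together, a loop can review every piece of a long document and collect the feedback in one file. A sketch under our own naming; `OLLAMA` is parameterized so the binary can be overridden:

```shell
# review_all: chunk DOC into 300-line pieces, review each with the local
# model, and collect all feedback in /tmp/reviews.txt. Names are ours.
OLLAMA=${OLLAMA:-ollama}

review_all() {
  doc=$1
  rm -f /tmp/doc-chunk-* /tmp/reviews.txt
  split -l 300 "$doc" /tmp/doc-chunk-
  for chunk in /tmp/doc-chunk-*; do
    echo "=== $chunk ===" >> /tmp/reviews.txt
    "$OLLAMA" run llama3.2:1b \
      "Review this section. List 3 specific improvements." \
      < "$chunk" >> /tmp/reviews.txt
  done
}

# Example:
# review_all your-document.md
```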

πŸ¦™πŸ”§πŸ“β­πŸ’ͺπŸ”¨πŸ“‹πŸ§ πŸ”’πŸš€ J β†’ K 🎯

K
Loading...
🎯 The Bottom Line

A 1.3GB model running entirely on a laptop gave us:

  • 8/10 overall score
  • 9 specific improvement recommendations
  • Accurate understanding of the core thesis
  • Zero API costs
  • Complete privacy

The feedback was genuinely useful. We're implementing several of the suggestions.

The future of AI isn't just cloud giants. It's also small models running locally, privately, instantly - giving you useful feedback on your work without sending a single byte off your machine.


Review conducted 2026-01-07 using Ollama llama3.2:1b running locally on macOS
