We Asked a Local AI to Review Our Book. Running on a MacBook. No Cloud. No API Costs.
Published on: January 7, 2026
What happens when you ask a 1.3GB language model running entirely on your laptop to review a 7,765-line physics book?
We found out.
The Setup: Ollama running llama3.2:1b locally on a MacBook. No internet required. No API costs. No data leaving the machine. Complete privacy.
The model is small but mighty. And its feedback was surprisingly useful.
The Process
Step 1: Check Available Models
ollama list
Output:
- NAME: llama3.2:1b
- SIZE: 1.3 GB
- MODIFIED: 4 days ago
Step 2: Extract Book Sections
The full book is 7,765 lines. Too large for the model's context window. We chunked it:
# Extract preface (first 240 lines)
head -240 public/book/tesseract-physics-2025-12-21-as-reviewed.md > /tmp/book-preface.md
# Extract Introduction Section 1 (lines 255-909)
sed -n '255,909p' public/book/tesseract-physics-2025-12-21-as-reviewed.md > /tmp/book-intro1.md
# Extract S=P=H chapter (lines 3000-3500)
sed -n '3000,3500p' public/book/tesseract-physics-2025-12-21-as-reviewed.md > /tmp/book-section3.md
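The sed and head commands above hand-pick line ranges. A more mechanical alternative (a sketch, not the method used for this review) is split(1), which cuts any file into fixed-size chunks. The demo file and the 500-line chunk size below are placeholders:

```shell
# Sketch: chunk a long file into fixed-size pieces with split(1)
# instead of hand-picking sed ranges. The demo book and 500-line
# chunk size are placeholders, not values from this post.
chunk_file() {
  # $1 = input file, $2 = lines per chunk, $3 = output prefix
  split -l "$2" "$1" "$3"
}

rm -f /tmp/demo-chunk-*                  # idempotent re-runs
seq 1200 > /tmp/demo-book.md             # stand-in for the 7,765-line book
chunk_file /tmp/demo-book.md 500 /tmp/demo-chunk-
wc -l /tmp/demo-chunk-*                  # chunks of 500 + 500 + 200 lines
```

split names the chunks with alphabetic suffixes (demo-chunk-aa, -ab, -ac), which keeps them in order for a later loop.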
Step 3: Send to Ollama with Review Prompts
Preface Review Prompt:
cat /tmp/book-preface.md | ollama run llama3.2:1b \
"You are a book reviewer. Please review this preface from
Tesseract Physics - Fire Together, Ground Together.
Evaluate: 1) Hook effectiveness, 2) Clarity of thesis,
3) Writing quality, 4) Areas for improvement.
Be specific and constructive."
Introduction Review Prompt:
head -200 /tmp/book-intro1.md | ollama run llama3.2:1b \
"Review this academic text about database normalization
and AI alignment. What are the main claims?
Rate the writing 1-10. List 3 improvements needed."
S=P=H Chapter Prompt:
head -150 /tmp/book-section3.md | ollama run llama3.2:1b \
"Summarize the main ideas in this text about physics
and information theory. What is the author arguing?"
Technical Chapter Prompt (from a fourth chunk, extracted the same way as the others):
head -200 /tmp/book-section4.md | ollama run llama3.2:1b \
"Review this technical chapter. Rate 1-10: clarity,
originality, practical value. What makes this
compelling or not?"
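The four prompts above can also be batched: the sketch below loops one rubric prompt over every chunk in a directory. The dry-run fallback when ollama is not on PATH, and the placeholder chunk, are assumptions added here so the loop runs anywhere; swap in the real chunk files from Step 2.

```shell
# Batch sketch: run one rubric prompt over every chunk in a directory.
# Falls back to a dry-run stub when ollama is not installed (an
# assumption added for this sketch, not part of the original run).
if command -v ollama >/dev/null 2>&1; then
  review() { ollama run llama3.2:1b "$1"; }
else
  review() { cat >/dev/null; echo "[dry run: ollama not on PATH]"; }
fi

mkdir -p /tmp/chunks
printf 'placeholder chunk text\n' > /tmp/chunks/demo.md  # stand-in for book-preface.md etc.

for chunk in /tmp/chunks/*.md; do
  out="${chunk%.md}-review.txt"
  head -200 "$chunk" | review \
    "Review this text. Rate 1-10: clarity, originality, practical value." > "$out"
  echo "wrote $out"
done
```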
The Scores
Preface Scores:
- Hook Effectiveness: 7/10
- Clarity of Thesis: 9/10
- Writing Quality: 8/10
- Overall: 8/10
Introduction Scores:
- Writing Style: 8/10
- Clarity and Concision: 7/10
- Relevance and Timeliness: 9/10
- Data and Evidence: 6/10
Technical Chapter Scores:
- Clarity: 9/10
- Originality: 8.5/10
- Practical Value: 9/10
- Overall: 8/10
Average Score: 8/10 - The local AI found the book engaging, clear, and timely. Main critique: needs more concrete data citations.
What the Model Liked
Preface Strengths:
- The metaphor "We Killed God" effectively grabs attention
- Thesis is clearly stated and easy to follow
- Engaging and conversational tone with touches of humor
- "We didn't kill Codd out of malice" works well
Introduction Strengths:
- Engaging narrative with conversational tone
- Clear structure with logical flow
- Emotional resonance - conveys developer frustration effectively
- Topic is highly relevant and timely
Technical Chapter Strengths:
- Clear, concise, well-defined concepts
- Innovative approach combining neuroscience, psychology, and AI
- Examples and analogies help illustrate complex concepts
- Valuable perspective on brain function and information retrieval
What the Model Flagged
Preface Issues:
- Hook is somewhat simplistic - could provide more depth
- Text jumps abruptly between topics
- Some sentences feel forced or wordy
- Ends abruptly without summarizing main points
Introduction Issues:
- Assumes familiarity with technical terms (C3 alignment, referential constraints)
- Relies heavily on anecdotal examples
- Numbers like "35M fine" lack supporting context
- Doesn't adequately discuss alternative approaches
Technical Chapter Issues:
- Lacks concrete data or research to support claims
- Relies heavily on theoretical frameworks
- Some specific measurements need citation
- Examples may not apply to all contexts
Prioritized Improvements
High Priority:
- Add technical definitions for C3 alignment, referential constraints, S=P=H upfront
- Cite concrete data sources for the 35M fine and 0.3% drift claims
- Add transitional phrases between sections for smoother flow
Medium Priority:
- Address common counterarguments ("What about data quality?" "NoSQL alternatives?")
- Break long sentences into shorter, clearer ones
- End preface with clear summary and call-to-action
Lower Priority:
- Balance stories with research citations
- Tone down emotive language where appropriate
- Provide more specific measurement methodology
Did It Actually Understand the Book?
The most important validation: did a small local model understand what the book is actually about?
Ollama's Summary of S=P=H:
"The author is arguing that the Unity Principle (S=P=H) implies that semantic proximity, or concepts that belong together, should be physically aligned with hardware optimization, or cache alignment. By using a short-ranked matrix to store relevant information together, the author claims it is possible to reduce the memory access time and improve overall system performance."
That's exactly right. A 1.3GB model running locally grasped the central argument:
- Semantic meaning should match physical storage
- Position encodes meaning
- This reduces memory access time
- Cache alignment follows semantic alignment
Why Review Locally?
Privacy:
- No book content sent to cloud APIs
- No risk of training data leakage
- Complete control over intellectual property
Cost:
- Zero API costs
- Unlimited reviews
- No token counting anxiety
Speed:
- Each review took 10-30 seconds
- No rate limiting
- No API queue waiting
Reproducibility:
- Same model, same prompts, same results
- Version controlled
- Shareable methodology
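Reproducibility can be made concrete with a one-line provenance record per review. This is a sketch only: log_review_meta, its field names, and the cksum checksum are illustrative choices, not tooling from the original run.

```shell
# Sketch: record which model, prompt, and input produced each review,
# so later runs can be compared. Field names are illustrative.
log_review_meta() {
  # $1 = input chunk file, $2 = prompt text
  echo "model=llama3.2:1b input_cksum=$(cksum < "$1" | cut -d' ' -f1) prompt=\"$2\""
}

printf 'chunk text\n' > /tmp/chunk.md
log_review_meta /tmp/chunk.md "Review this text." >> /tmp/review-log.txt
tail -1 /tmp/review-log.txt
```

Appending one record per run gives you a diffable history: if the checksum and prompt match but the scores move, you know the model (or sampling) changed, not your text.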
Try It Yourself
Install Ollama:
# macOS
brew install ollama
# Or download from ollama.ai
Pull the Model:
ollama pull llama3.2:1b
Review Your Own Content:
cat your-document.md | ollama run llama3.2:1b \
"Review this text. Rate 1-10: clarity, originality,
practical value. List 3 specific improvements."
For Larger Documents:
# Chunk it first
head -200 your-document.md | ollama run llama3.2:1b "Your prompt here"
The Bottom Line
A 1.3GB model running entirely on a laptop gave us:
- 8/10 overall score
- 9 specific improvement recommendations
- Accurate understanding of the core thesis
- Zero API costs
- Complete privacy
The feedback was genuinely useful. We're implementing several of the suggestions.
The Meta-Lesson: If a small local model can understand S=P=H well enough to summarize it accurately, the writing is clear. If it struggles, the writing needs work. Local AI as a clarity check.
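That clarity check can be scripted: ask the model for a one-sentence summary of a chunk and compare it to your thesis by eye. A sketch, with the same dry-run fallback assumption when ollama is absent:

```shell
# Sketch of the clarity check: one-sentence summary per chunk.
# The dry-run stub is an assumption so the sketch runs without a model.
summarize() {
  if command -v ollama >/dev/null 2>&1; then
    ollama run llama3.2:1b "Summarize this text in one sentence."
  else
    cat >/dev/null; echo "[dry run: ollama not on PATH]"
  fi
}

printf 'Semantic meaning should match physical storage.\n' \
  | summarize > /tmp/clarity-summary.txt
cat /tmp/clarity-summary.txt   # if this misses your thesis, revise the text
```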
The future of AI isn't just cloud giants. It's also small models running locally, privately, instantly - giving you useful feedback on your work without sending a single byte off your machine.
Related Reading
- Substrate Relativity: Why Your AI Lies and Your Gut Doesn't – Why even small models drift (k_E = 0.003 applies to all systems)
- k_E = 0.003: Five Independent Proofs – The drift constant that governs even 1.3GB models
- Harari Says You're a Hackable Animal – How grounding creates resistance to manipulation
- Temporal Grounding: Time Γ Time = Space – Why running locally (minimizing latency) reduces semantic drift
- Unity Principle (Chapter 1) – The S=P=H foundation
Review conducted 2026-01-07 using Ollama llama3.2:1b running locally on macOS