# Philosophy

> "Code is the Artifact. Context is the Source."
## The Problem
Traditional version control captures what changed (code diffs) but loses why it changed (reasoning).
When you look at a commit from 6 months ago:
- ❌ Why was this approach chosen?
- ❌ What alternatives were considered?
- ❌ What was the user actually trying to achieve?
The code is there, but the context is lost.
## The AI Opportunity
AI coding assistants create a unique opportunity. They:
- Receive explicit user intents ("Fix the auth bug")
- Reason about approaches ("I'll add a try/catch")
- Make decisions with rationale
This reasoning already exists - it just gets discarded after the session ends.
## AGIT's Solution
AGIT captures this reasoning as it happens:
Traditional Git:

```
commit → code diff → done
```

AGIT:

```
intent → reasoning → decision → code → neural commit
  ↓          ↓           ↓
[stored]  [stored]  [stored]
```
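To make the pipeline concrete, here is a minimal sketch of what a single neural commit record might contain. The field names are illustrative only, not AGIT's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one "neural commit" record.
# Field names are illustrative, not AGIT's real schema.
@dataclass
class NeuralCommit:
    intent: str                 # what the user asked for
    reasoning: str              # how the AI approached it
    decision: str               # the approach that was chosen
    git_commit: str             # SHA of the code commit this explains
    alternatives: list[str] = field(default_factory=list)

nc = NeuralCommit(
    intent="Fix the auth bug",
    reasoning="Token refresh can raise, so wrap it and retry once",
    decision="Add try/except around refresh_token()",
    git_commit="a1b2c3d",
    alternatives=["Increase token TTL"],
)
```

The key point is the last field: every record carries the SHA of the code commit it explains, which is what keeps reasoning attached to code instead of evaporating with the session.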
## The Dual Graph
AGIT creates a "Neural Graph" parallel to Git's commit graph:
| Git Graph | Neural Graph |
|---|---|
| Code changes | Reasoning context |
| What changed | Why it changed |
| Diffs | Traces |
| Merge history | Decision history |
Both graphs are linked - every neural commit references a git commit.
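One way to picture the link is an index from git SHAs to reasoning traces. This is a sketch of the idea, not AGIT's storage format:

```python
# Sketch: the neural graph indexed by the git SHA each trace annotates.
# SHAs and traces here are made-up examples.
neural_graph = {
    "a1b2c3d": {"intent": "Fix the auth bug", "decision": "Add try/except"},
    "e4f5a6b": {"intent": "Speed up search", "decision": "Add an index"},
}

def why(git_sha: str) -> str:
    """Return the stored reasoning for a code commit, if any."""
    trace = neural_graph.get(git_sha)
    if trace is None:
        return "(no context recorded)"
    return f"intent: {trace['intent']}; decision: {trace['decision']}"
```

Git answers "what changed in `a1b2c3d`?" with a diff; the parallel graph answers `why("a1b2c3d")` with the recorded intent and decision.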
## Key Principles
### 1. Capture at the Source
Context is captured when it's created, not reconstructed later:
- User intents are logged when expressed
- AI reasoning is logged when decided
- Decisions are logged when made
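Capture-at-source amounts to an append-only event log written at the moment each event occurs, so timestamps reflect creation time rather than later reconstruction. A minimal sketch (the event kinds mirror the list above; the API is hypothetical):

```python
import time

# Sketch: append-only log, written as events happen.
# Timestamps come from capture time, never from reconstruction.
log: list[dict] = []

def capture(kind: str, payload: str) -> None:
    log.append({"ts": time.time(), "kind": kind, "payload": payload})

capture("intent", "Fix the auth bug")
capture("reasoning", "Wrap refresh_token() in try/except")
capture("decision", "Retry once on TokenError")
```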
### 2. AI as Source, Not Consumer
AGIT inverts the typical AI tool pattern:
```
Traditional: CLI tool  → calls →     LLM
AGIT:        AI Editor → pushes to → AGIT
```
Your AI editor is the source of context, not the consumer.
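The inversion can be sketched as a push-style interface: the editor hands finished context to the store, and the store never calls out to a model. This is a hypothetical interface for illustration, not AGIT's real API:

```python
# Hypothetical push-style store: the editor is the producer of context.
class AgitStore:
    def __init__(self) -> None:
        self.traces: list[dict] = []

    def push(self, intent: str, reasoning: str, git_commit: str) -> int:
        """Accept finished context from the editor. No LLM calls here."""
        self.traces.append(
            {"intent": intent, "reasoning": reasoning, "git_commit": git_commit}
        )
        return len(self.traces) - 1  # trace id

store = AgitStore()
trace_id = store.push("Fix the auth bug", "Add try/except", "a1b2c3d")
```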
### 3. Deterministic Processing
AGIT doesn't use LLMs internally:
- The AI editor does the thinking
- AGIT stores and synthesizes
- Results are reproducible
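Because AGIT only stores and synthesizes, the same trace can always yield the same result. A sketch of what reproducibility buys you, using content-addressed trace ids (an illustration, not AGIT's actual id scheme):

```python
import hashlib
import json

# Sketch: deterministic, content-addressed id for a trace.
# Same content in, same id out -- no LLM, no randomness.
def trace_id(trace: dict) -> str:
    canonical = json.dumps(trace, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]

t = {"intent": "Fix the auth bug", "decision": "Add try/except"}
```

Serializing with `sort_keys=True` makes the id independent of dict insertion order, so two machines storing the same trace agree on its identity.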
### 4. Non-Invasive
AGIT works alongside existing tools:
- Separate from `.git/` - doesn't modify your git workflow
- Optional - use it when it's valuable
## Use Cases
### Code Review
Understand intent before reviewing code:
```
agit show <commit-from-pr>
# See: intent was X, approach was Y, considered Z
```
### Debugging
Find out why a bug exists:
```
agit log
agit show <when-bug-introduced>
# See the reasoning that led to the issue
```
### Onboarding
New team members understand decisions:
```
agit show <important-feature>
# See the full reasoning trace
```
### AI Training
Build datasets from real development sessions:
- Actual user intents
- Real reasoning patterns
- Connected to code outcomes
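A stored trace pairs naturally with the code it produced, so building a training example is a straightforward transformation. A sketch, with made-up trace data and illustrative field names:

```python
# Sketch: turn stored traces into (prompt, rationale, completion) examples.
# The trace below is invented for illustration.
traces = [
    {
        "intent": "Fix the auth bug",
        "reasoning": "Wrap refresh_token() in try/except",
        "diff": "+try:\n+    refresh_token()\n+except TokenError:\n+    retry()",
    },
]

def to_example(trace: dict) -> dict:
    return {
        "prompt": trace["intent"],        # actual user intent
        "rationale": trace["reasoning"],  # real reasoning pattern
        "completion": trace["diff"],      # the code outcome it led to
    }

dataset = [to_example(t) for t in traces]
```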
## The Future
As AI becomes more integrated into development:
- Context preservation becomes more valuable
- Reasoning traces become team knowledge
- The "why" becomes as important as the "what"
AGIT is infrastructure for this AI-native future.
## See Also
- Architecture - How it's built
- Quick Start - Try it yourself