From AI Session to Permanent Commit History: The Complete Workflow
A session starts with drafts, moves through review, becomes a git commit, and gets indexed as a checkpoint. Nearly all of this flow is automatic and invisible; only the review and the commit itself involve the developer. Your AI work transforms from session-only knowledge into permanent, searchable institutional memory.
The Complete Journey
When a developer uses an AI agent to generate code changes, the work doesn't immediately go into permanent history. Instead, there's a deliberate, human-controlled journey from temporary session to permanent record:
- AI Session Begins: Developer starts a coding session with an AI agent.
- Draft Commits Generated: As the agent works, it produces Draft Commits—temporary checkpoints within the session.
- Human Review: The developer reviews the work, can ask for iterations, or approve.
- Git Commit: Once satisfied, the developer runs `git commit`, which moves code into the permanent git repository.
- Post-Commit Hook Triggers: Bitloops automatically captures all activity into a Committed Checkpoint.
- Checkpoint Is Indexed: The checkpoint becomes searchable and queryable in the knowledge store.
- Future Access: Developers and AI agents can now access the full context for this change.
This workflow is deliberate. It ensures that permanent history only includes changes the developer has explicitly approved, and it preserves the full reasoning chain that led to those changes.
Stage 1: Starting an AI Session
A developer opens their code editor and starts an AI session. The mechanism depends on the tooling:
- IDE Integration: The developer might use a sidebar or command palette: "Start Bitloops Session" or similar.
- CLI Tool: `bitloops session start` begins a new session.
- API Integration: Programmatic tools initialize a session context.
What happens:
- A session ID is created locally
- The current git state is captured (which commit, which branch, which files have changes)
- The knowledge store is queried to load prior context (prior commits, prior reasoning captures)
- The developer can now interact with an AI agent that's aware of the codebase and history
The session is isolated. Nothing is committed to disk yet. It's a workspace for experimentation.
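Conceptually, session start just builds an in-memory record of the current git state and prior context. A minimal Python sketch of what such a record might look like (field names are illustrative assumptions, not the actual Bitloops data model):

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Session:
    """In-memory session record; nothing here touches permanent git history."""
    branch: str
    head_commit: str
    session_id: str = field(
        default_factory=lambda: "sess_" + uuid.uuid4().hex[:9])
    draft_commits: list = field(default_factory=list)

# Values mirror the example transcript below and are illustrative.
session = Session(branch="feature/auth-refresh", head_commit="abc123def")
```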
Example initialization:
$ bitloops session start
Session ID: sess_abc123xyz
Current branch: feature/auth-refresh
Current HEAD: abc123def... (refactor: extract JWT service)
Prior context loaded: 12 related commits from last 30 days
Ready for AI coding.

Stage 2: AI Agent Works and Generates Draft Commits
The developer asks the AI agent to do something. Understanding human-AI collaboration models helps structure this request effectively.
"Refactor the authentication middleware to support multi-factor authentication. Add support for TOTP and SMS as factors. Ensure existing JWT flow still works unchanged. Target: under 500ms per auth request."
The AI agent then:
- Analyzes the codebase: Reads the auth middleware, JWT implementation, existing test structure.
- Reasons about the approach: "Multi-factor auth requires a state machine approach. Current flow is synchronous JWT validation. I need to add a challenge-response phase without breaking existing clients."
- Generates code incrementally: Creates the MFA service, modifies the middleware, writes tests.
- Generates a Draft Commit: After the first round of work, the agent produces a checkpoint:
Flow diagram
The developer sees this Draft Commit in the UI. They can:
- Review it: Look at the code, verify it makes sense.
- Request changes: "This doesn't handle SMS recovery codes. Please add that."
- Approve it: "Looks good, continue."
- Discard it: "Actually, let's try a different approach. Start over."
The agent might iterate. The developer asks for changes. Another Draft Commit is generated. This cycle repeats until the developer is satisfied.
Draft Commits live only in the session. They're not committed to git. They're not persisted to disk. If the session ends, they're gone. This is intentional—they're working snapshots, not permanent records.
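A Draft Commit can be pictured as a plain in-memory record: the code changes plus the reasoning and metadata around them. This sketch is illustrative only; the field names are assumptions, not the actual Bitloops schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DraftCommit:
    """Ephemeral, session-only checkpoint: never written to git or disk."""
    diff: str        # the code changes
    prompt: str      # what was asked
    reasoning: str   # why it was done this way
    model: str       # which model produced it
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

drafts = []  # lives only for the lifetime of the session
drafts.append(DraftCommit(
    diff="+ class MfaService: ...",
    prompt="Add SMS as an MFA option",
    reasoning="Service pattern keeps MFA logic testable and separate",
    model="claude-opus-4-6",
))
# If the session ends without a git commit, `drafts` simply disappears.
```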
Stage 3: Developer Reviews and Iterates
The developer is in control at this stage. They're the arbiter of quality.
Typical review flow:
Developer: "Add SMS as an MFA option"
Agent: [generates code, creates Draft Commit #2]
Developer: [reviews] "Good, but error handling is incomplete. What if SMS
delivery fails?"
Agent: [analyzes objection, modifies code, creates Draft Commit #3]
Developer: [reviews] "Much better. Now let's add metrics tracking."
Agent: [implements metrics, creates Draft Commit #4]
Developer: [reviews] "Perfect. This is ready."

Each Draft Commit includes:
- The code changes
- The reasoning (what was asked, what was considered, why it was done this way)
- Metadata (which model, how long, how many tokens)
The developer can see all of this. They can ask questions that make sense only if they understand the reasoning:
- "You chose the service pattern over a middleware-only approach. Was that decision driven by testability or maintainability?"
- "I see you rejected message queuing. Was that because of complexity or because the throughput doesn't warrant it?"
These questions are possible because the reasoning is explicit, not buried in code.
Stage 4: Committing to Git
When the developer is satisfied with the work, they commit it to git using standard commands:
$ git add .
$ git commit -m "feat: add MFA with TOTP and SMS support
- Extract MFA logic into dedicated service
- Support TOTP (time-based one-time passwords)
- Support SMS delivery via Twilio
- Maintain backward compatibility with JWT-only flow
- Add rate limiting and account lockout after N failed attempts"

This is a normal git commit. The developer is doing exactly what they'd do without AI assistance. No new commands. No Bitloops-specific workflow. Just standard git.
At this point:
- The code enters the permanent git repository
- The commit gets a hash (e.g., `abc123def456...`)
- The commit is on the branch (e.g., `feature/auth-refresh`)
- The session is complete (though it could continue with new work)
Stage 5: Post-Commit Hook Triggers Automatically
Here's where Bitloops takes over automatically. A git post-commit hook fires (installed once when Bitloops is set up):
$ git commit -m "feat: add MFA with TOTP and SMS support"
[feature/auth-refresh abc123def456] feat: add MFA with TOTP and SMS support
Author: Developer Name <dev@example.com>
Date: Wed Mar 5 14:32:18 2026 +0000
(Bitloops post-commit hook) Capturing Committed Checkpoint...
✓ Retrieved Draft Commits from session
✓ Bundled activity chain, reasoning, metadata
✓ Bound to commit hash abc123def456
✓ Indexed in SQLite + HNSW vector database
✓ Checkpoint committed to local knowledge store

The hook does several things automatically:
- Retrieves the Draft Commits: Finds the Draft Commits from the session that correspond to this git commit.
- Constructs the Committed Checkpoint: Bundles the activity chain, reasoning, prompts, constraints, alternatives considered, symbols touched, model metadata, and more.
- Binds to Commit Hash: Cryptographically links the checkpoint to the git commit hash. This ensures immutability—the checkpoint can't be separated from the commit or altered without invalidating the hash.
- Stores in Knowledge Store: Saves the checkpoint to the local SQLite database and indexes it in the HNSW vector database.
- Validates: Ensures all required fields are present and the checkpoint is valid.
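One plausible way to implement the binding in step 3 is to digest the checkpoint contents together with the commit hash, so any later edit to the checkpoint invalidates the stored digest. This is a hedged sketch, not the actual Bitloops implementation:

```python
import hashlib
import json

def bind_checkpoint(checkpoint: dict, commit_hash: str) -> str:
    """Digest the checkpoint contents together with the git commit hash."""
    payload = json.dumps(checkpoint, sort_keys=True) + commit_hash
    return hashlib.sha256(payload.encode()).hexdigest()

checkpoint = {"reasoning": "Chose service pattern", "model": "claude-opus-4-6"}
digest = bind_checkpoint(checkpoint, "abc123def456")

# Any later modification to the checkpoint yields a different digest,
# so tampering is detectable.
tampered = {**checkpoint, "reasoning": "Chose middleware pattern"}
assert bind_checkpoint(tampered, "abc123def456") != digest
```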
The entire process is transparent. The developer sees a brief confirmation message and continues working. No manual steps. No additional commands.
The Committed Checkpoint now exists and looks something like this:
Flow diagram
This checkpoint is now immutable. It's cryptographically bound to the git commit hash. If someone tries to modify it later, the hash will be invalid. This is powerful for auditing and compliance, especially important for audit trails.
Stage 6: Checkpoint Is Indexed
The Committed Checkpoint isn't just stored—it's indexed. Specifically:
- Structured Indexing: The checkpoint's metadata (model, timestamp, symbols, constraints) is indexed in SQLite. This enables fast lookups by date, model, file, or other structured criteria.
- Semantic Indexing: The checkpoint's reasoning text (prompts, constraints, decisions, explanations) is embedded as vectors and stored in the HNSW vector database. This enables natural language search.
Both indices are built automatically. The developer doesn't need to do anything.
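The two indices can be pictured side by side. This sketch uses an in-memory SQLite table for the structured side, and a toy word-overlap score standing in for real embeddings plus HNSW on the semantic side (all names and values are illustrative):

```python
import sqlite3

# Structured index: checkpoint metadata goes into a SQLite table.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE checkpoints
              (commit_hash TEXT, model TEXT, file TEXT, ts TEXT)""")
db.execute("INSERT INTO checkpoints VALUES (?, ?, ?, ?)",
           ("abc123", "claude-opus-4-6", "middleware/auth.ts", "2026-03-05"))

# Fast structured lookup: which commits touched this file?
rows = db.execute("SELECT commit_hash FROM checkpoints WHERE file = ?",
                  ("middleware/auth.ts",)).fetchall()

# Semantic index: in production this is embeddings in an HNSW store;
# a toy word-overlap score stands in for vector similarity here.
def similarity(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / (len(wa | wb) or 1)

reasoning_docs = {"abc123": "approach multi-factor authentication via service pattern"}
query = "How did we approach multi-factor authentication"
best = max(reasoning_docs, key=lambda k: similarity(reasoning_docs[k], query))
```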
Examples of what becomes queryable:
# Structured queries (SQLite)
$ bitloops query --model=claude-opus-4-6 --since=2026-02-01
(Shows all commits made with Claude Opus 4.6 since Feb 1)
$ bitloops query --file=middleware/auth.ts --last=10
(Shows the 10 most recent commits touching the auth middleware)
# Semantic queries (HNSW vector search)
$ bitloops query "How did we approach multi-factor authentication?"
(Returns commits related to MFA, ranked by relevance)
$ bitloops query "What were the trade-offs in caching decisions?"
(Returns commits that discuss caching trade-offs)

Both structured and semantic queries are fast. The developer (or AI agent) can get context instantly.
Stage 7: Future Access and Building on Prior Decisions
Now the checkpoint is part of the permanent history. Future work builds on it.
Scenario A: Onboarding a New Team Member
A junior engineer joins the team. They need to understand the authentication system. Instead of reading code and asking senior developers, they can:
$ bitloops query "Show me the decisions behind multi-factor authentication"

They get the full context: the problem statement, the constraints, the approaches considered, why MFA was chosen over alternatives, the trade-offs made. They inherit the institutional knowledge captured at the time the code was written.
Scenario B: Debugging an Issue
A production issue surfaces: SMS delivery is taking 800ms, exceeding the target of <500ms. A developer pulls up the reasoning:
Original Prompt: "...Target: under 500ms per auth request."
Constraints Applied: "Must complete auth in <500ms"
Decision Point: "SMS delivery: chose Twilio (tested, documented)"

Now the developer understands: The 500ms constraint was explicit. The choice of Twilio was deliberate. The issue is real. The developer might:
- Optimize the Twilio integration (caching tokens, batching, etc.)
- Add async SMS delivery (send code, validate in background)
- Change the constraint (if business needs allow)
But they're making informed decisions because the original reasoning is explicit.
Scenario C: AI Agent Generating Related Code
A new task arrives: "Add email as an MFA option alongside TOTP and SMS."
An AI agent working on this task can query prior reasoning:
Prior Decision: "MFA implemented via service pattern for clean separation"
Constraints: "No schema changes, must stay <500ms"
Alternatives: "Considered external MFA service, rejected as overkill"

The agent doesn't start from scratch. It understands the architectural approach, the constraints, the reasoning. It can generate code that's consistent with prior decisions. This builds institutional knowledge over time.
Stage 8: Using History to Guide Future Work
As checkpoints accumulate, patterns emerge. Over a year, a team might have 200+ commits, each with reasoning captures. This creates a learnable dataset.
Pattern Analysis
The team can ask: "What architectural patterns do we use most often?"
The data might show:
- Service pattern: 45 commits
- Repository pattern: 30 commits
- Middleware pattern: 25 commits
- Direct ORM usage: 8 commits
This tells them: "Service pattern is our default. Repository pattern is for complex queries. We avoid direct ORM in new code." This is implicit knowledge made explicit.
Constraint Evolution
The team can ask: "How have performance constraints evolved?"
The data might show:
- 2025: "Sub-second is acceptable"
- 2026 Q1: "Sub-200ms required"
- 2026 Q2: "Sub-100ms required for mobile"
This reveals: Performance is becoming more critical. Future AI agents should prioritize performance, not just correctness.
Decision Confidence
The team can ask: "Which decisions had low confidence?"
The data might show:
- "Caching strategy marked as low confidence" (3 commits)
- "Database indexing strategy marked as medium confidence" (5 commits)
- "API design marked as high confidence" (12 commits)
This tells them: Watch the caching and indexing strategies. They might need revisiting. The API design is solid.
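All three aggregate questions above reduce to simple queries over checkpoint metadata. A sketch over a hypothetical in-memory list of checkpoints (the field names are assumptions):

```python
from collections import Counter

# Hypothetical checkpoint metadata accumulated over many commits.
checkpoints = [
    {"pattern": "service", "confidence": "high"},
    {"pattern": "service", "confidence": "low"},
    {"pattern": "repository", "confidence": "medium"},
    {"pattern": "service", "confidence": "high"},
]

# "What architectural patterns do we use most often?"
pattern_counts = Counter(c["pattern"] for c in checkpoints)

# "Which decisions had low confidence?"
low_confidence = [c for c in checkpoints if c["confidence"] == "low"]
```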
What's Automatic vs. What Requires Human Action
It's important to be clear about what happens automatically and what requires the developer:
| Stage | Automatic? | Who's Involved | What Happens |
|---|---|---|---|
| Session Start | Partially | Developer | Developer initiates; system loads context |
| AI Generation | Automatic | AI Agent | Agent generates code, creates Draft Commits |
| Review & Iteration | Manual | Developer | Developer reviews, requests changes |
| Git Commit | Manual | Developer | Developer runs git commit (normal command) |
| Post-Commit Hook | Automatic | System | Bitloops captures checkpoint automatically |
| Indexing | Automatic | System | Checkpoint is indexed in SQLite + HNSW |
| Future Queries | Manual | Developer/AI | Someone queries the checkpoint for context |
The developer's only new responsibility is the review step, and that's actually a responsibility they should have anyway (don't commit code you haven't reviewed, even if it's AI-generated). Everything else is automatic.
Concrete End-to-End Example
Let's walk through a complete example from start to finish.
Day 1, 10:00 AM: Session Starts
Developer Mary starts her workday. She opens her IDE and starts a Bitloops session.
$ bitloops session start
Session ID: sess_mary_001
Current branch: main
Context loaded from 8 recent commits
Ready for AI coding.

Day 1, 10:05 AM: Task Begins
Mary asks the AI agent:
"I need to add request logging to the API. Log all requests and responses, including headers, body (sanitized), and response time. Don't log sensitive data like passwords or tokens. Target: <5ms overhead per request."
Day 1, 10:15 AM: First Draft Commit
The agent has analyzed the codebase and generated code. It creates a Draft Commit:
- Added `RequestLogger` middleware
- Added `SanitizationRules` for sensitive fields
- Added tests
Mary reviews it:
Flow diagram
Mary approves.
Day 1, 10:25 AM: Iteration
Mary realizes: "We also need to log database queries, not just HTTP requests."
She asks the agent:
"Add database query logging. Log the query, bind parameters (sanitized), and execution time. Integrate it with the request logger so queries are tied to parent requests."
Day 1, 10:45 AM: Second Draft Commit
The agent generates more code. New Draft Commit:
- Added `QueryLogger` service
- Modified database client to integrate with logging
- Added configuration for which queries to log
- Updated tests
Mary reviews:
Flow diagram
Mary approves.
Day 1, 11:00 AM: Review is Complete
Mary decides the work is ready. She commits to git:
$ git add .
$ git commit -m "feat: add comprehensive request and query logging
- Add request logging middleware for HTTP traffic
- Add database query logger with sampling for high-traffic queries
- Sanitize sensitive fields (passwords, tokens, API keys)
- Log response times and execution times
- Integrate query logs with request ID for tracing
- Target: <5ms overhead per request"
[main abc789def012] feat: add comprehensive request and query logging
Author: Mary <mary@example.com>
Date: Wed Mar 5 11:00:00 2026 +0000
(Bitloops post-commit hook) Capturing Committed Checkpoint...
✓ Bundled 2 Draft Commits into activity chain
✓ Captured reasoning, constraints, alternatives
✓ Bound to commit hash abc789def012
✓ Indexed in knowledge store
✓ Committed Checkpoint ready for future access

Day 15, 2:00 PM: Bug Found
A developer (James) finds an issue: query logging is too aggressive and slows down the transaction endpoint.
He queries the checkpoint:
Flow diagram
James understands: The sampling approach was deliberate, chosen over async logging. But the current configuration might be logging too much. He checks:
$ grep -r "LOG_SAMPLE_RATE" config/
LOG_SAMPLE_RATE=0.1 # Log 10% of queries

He realizes: Even 10% sampling on a high-traffic endpoint is too much. He lowers it to 1%:

LOG_SAMPLE_RATE=0.01 # Log 1% of queries

The issue is resolved. Without the checkpoint, James would have had to reverse-engineer why sampling was chosen and whether it was the right decision. With the checkpoint, he understood the trade-offs and could make an informed adjustment.
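James's one-line fix works because sampling is just a probability gate in front of the logger. A minimal sketch of such a gate (the class and its wiring are illustrative, not Bitloops or application code):

```python
import random

class QueryLogger:
    """Logs a fixed fraction of queries, trading completeness for overhead."""

    def __init__(self, sample_rate: float):
        self.sample_rate = sample_rate  # e.g. LOG_SAMPLE_RATE from config
        self.logged = []

    def log(self, query: str) -> None:
        # random.random() is in [0.0, 1.0), so rate 1.0 logs everything
        # and rate 0.0 logs nothing.
        if random.random() < self.sample_rate:
            self.logged.append(query)

logger = QueryLogger(sample_rate=1.0)
for _ in range(100):
    logger.log("SELECT * FROM transactions")
```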
Integration With Existing Workflows
A key point: Committed Checkpoints and Bitloops integration don't disrupt existing workflows. There are no new commands for the developer.
Before Bitloops:
$ git add .
$ git commit -m "Add logging"
$ git push

With Bitloops:
$ git add .
$ git commit -m "Add logging"
(Bitloops post-commit hook runs automatically)
$ git push

The only difference is the post-commit hook. Everything else is identical. Developers don't need training. The workflow is immediately familiar.
In the Background:
While the developer continues working, Bitloops is:
- Capturing the checkpoint
- Indexing it in SQLite and HNSW
- Making it queryable for future developers and AI agents
All invisible. All automatic.
An AI-Native Perspective
From an AI agent's perspective, this workflow creates a knowledge base. An agent working on a new task can:
- Read the current code (like any code agent does)
- Query the checkpoint history for prior reasoning
- Understand not just what was built, but why
- Generate code consistent with prior decisions
Over time, as checkpoints accumulate, agents get smarter within a codebase. They stop reinventing approaches. They build on institutional knowledge. This is possible because the reasoning—the why—is preserved alongside the code. Tools like Bitloops make this automatic, not an afterthought.
FAQ
What happens to Draft Commits if the developer doesn't commit?
They're discarded. Draft Commits live in the session. If the developer closes the session without committing, the Draft Commits are lost. This is intentional—they're working snapshots, not intended to be permanent.
Can I recover a session if my computer crashes?
Depends on implementation. If Bitloops persists session state (Draft Commits) to disk, you might be able to recover. But the guarantee isn't strong. This is why committing to git regularly is important—it's the permanent record.
Can I reorder Draft Commits if they're out of order?
The Committed Checkpoint captures the order in which Draft Commits were created. Reordering would break causality (Draft Commit 2 may depend on decisions made in Draft Commit 1). Best practice: let the agent work sequentially and commit once satisfied. If you feel the need to reorder, the work was probably tangled and is better redone in cleaner steps.
What if I merge multiple AI tasks into one git commit?
The Committed Checkpoint can handle this. It bundles multiple Draft Commits into a single activity chain. The reasoning preserves the sequence: "First, the agent did X. Then, the agent did Y. Finally, the agent did Z." The final git commit is still one commit with one hash, but the reasoning captures the multi-step journey.
Can other team members see Committed Checkpoints?
By default, checkpoints are in your local knowledge store. If you want to share them (for code review, onboarding, auditing), you'd export them or replicate them. This is a permissions design choice—treat checkpoints like source code.
How do I delete a Committed Checkpoint?
You can't. That's the point of immutability. If a checkpoint is wrong or contains sensitive information, you can:
- Create a new commit that corrects/redacts the information.
- That new commit gets a new checkpoint.
- The old checkpoint remains as historical record (it's still immutable, but superseded).
This is like git history—you can't rewrite the past, but you can move forward from it.
What if I use multiple AI services (Claude, GPT, Gemini)?
Each Committed Checkpoint includes the model metadata: checkpoints for Claude-generated code are tied to Claude, and checkpoints for GPT-generated code are tied to GPT. You get a multi-model history, which lets you ask questions like: "Do certain models exhibit different decision patterns?"
How large can a Committed Checkpoint be?
The checkpoint is metadata-heavy, not code-heavy. The code is stored in git; the checkpoint stores reasoning, prompts, constraints, and decision points—typically a few KB to a few hundred KB per commit. Even a repository with 500 checkpointed commits would total on the order of tens of megabytes, which is very manageable.
Primary Sources
- Pro Git Book — Git workflows, hooks, and integration points for automating commit recording.
- Software Engineering: A Practitioner's Approach — reference on software engineering practices and development workflows.
- HNSW — hierarchical algorithm for efficient semantic search over embedded commit history.
- FAISS — large-scale similarity search library for indexing session and commit embeddings.
- SQLite — lightweight database for storing session transcripts and commit metadata persistently.
- Qdrant — vector database for querying and retrieving similar AI decisions from history.