
From AI Session to Permanent Commit History: The Complete Workflow

A session starts with drafts, moves through review, becomes a git commit, and gets indexed as a checkpoint. The capture itself is automatic and invisible. Your AI work transforms from session-only knowledge into permanent, searchable institutional memory.

18 min read · Updated March 4, 2026 · AI Memory & Reasoning Capture

The Complete Journey

When a developer uses an AI agent to generate code changes, the work doesn't immediately go into permanent history. Instead, there's a deliberate, human-controlled journey from temporary session to permanent record:

  1. AI Session Begins: Developer starts a coding session with an AI agent.
  2. Draft Commits Generated: As the agent works, it produces Draft Commits—temporary checkpoints within the session.
  3. Human Review: The developer reviews the work, can ask for iterations, or approve.
  4. Git Commit: Once satisfied, the developer runs git commit, which moves code into the permanent git repository.
  5. Post-Commit Hook Triggers: Bitloops automatically captures all activity into a Committed Checkpoint.
  6. Checkpoint Is Indexed: The checkpoint becomes searchable and queryable in the knowledge store.
  7. Future Access: Developers and AI agents can now access the full context for this change.

This workflow is deliberate. It ensures that permanent history only includes changes the developer has explicitly approved, and it preserves the full reasoning chain that led to those changes.
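To make the flow concrete, here is a minimal sketch of the two objects that move through this pipeline. The field names are illustrative, not Bitloops' actual schema:

```python
from dataclasses import dataclass

@dataclass
class DraftCommit:
    """Temporary, session-only checkpoint produced by the agent (steps 2-3)."""
    title: str
    reasoning: str
    changed_files: list
    approved: bool = False

@dataclass
class CommittedCheckpoint:
    """Permanent record created by the post-commit hook (steps 5-6)."""
    commit_hash: str
    activity_chain: list      # the session's Draft Commits, in order
    reasoning: dict           # prompt, constraints, alternatives considered

# One approved draft becomes part of a checkpoint once `git commit` runs.
drafts = [DraftCommit(
    title="feat: add MFA service",
    reasoning="service pattern to isolate MFA logic",
    changed_files=["services/mfa.ts"],
    approved=True,
)]
checkpoint = CommittedCheckpoint(
    commit_hash="abc123def456",
    activity_chain=drafts,
    reasoning={"prompt": "Add MFA with TOTP and SMS"},
)
```

The key design point is that `DraftCommit` objects never leave the session, while a `CommittedCheckpoint` is only ever created from drafts the developer approved.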

Stage 1: Starting an AI Session

A developer opens their code editor and starts an AI session. The mechanism depends on the tooling:

  • IDE Integration: The developer might use a sidebar or command palette: "Start Bitloops Session" or similar.
  • CLI Tool: bitloops session start begins a new session.
  • API Integration: Programmatic tools initialize a session context.

What happens:

  • A session ID is created locally
  • The current git state is captured (which commit, which branch, which files have changes)
  • The knowledge store is queried to load prior context (prior commits, prior reasoning captures)
  • The developer can now interact with an AI agent that's aware of the codebase and history

The session is isolated: nothing has entered permanent history yet. It's a workspace for experimentation.

Example initialization:

$ bitloops session start
Session ID: sess_abc123xyz
Current branch: feature/auth-refresh
Current HEAD: abc123def... (refactor: extract JWT service)
Prior context loaded: 12 related commits from last 30 days
Ready for AI coding.

Stage 2: AI Agent Works and Generates Draft Commits

The developer asks the AI agent to do something. Understanding human-AI collaboration models helps structure a request like this one effectively:

"Refactor the authentication middleware to support multi-factor authentication. Add support for TOTP and SMS as factors. Ensure existing JWT flow still works unchanged. Target: under 500ms per auth request."

The AI agent then:

  1. Analyzes the codebase: Reads the auth middleware, JWT implementation, existing test structure.
  2. Reasons about the approach: "Multi-factor auth requires a state machine approach. Current flow is synchronous JWT validation. I need to add a challenge-response phase without breaking existing clients."
  3. Generates code incrementally: Creates the MFA service, modifies the middleware, writes tests.
  4. Generates a Draft Commit: After the first round of work, the agent produces a checkpoint:


Draft Commit #1
Title: "feat: add MFA service with TOTP support"
Changed Files:
services/mfa.ts (new)
middleware/auth.ts (modified)
tests/auth.test.ts (modified)
Reasoning:
Prompt: "Add MFA with TOTP..."
Approach: "Service pattern to separate MFA logic from middleware"
Constraints: "No schema changes, backward compatible JWT"
Confidence: High (straightforward service extraction)
Symbols Touched:
MFAService (new)
AuthMiddleware (modified)
(9 other locations)
Code Diff: (stored temporarily in session)

The developer sees this Draft Commit in the UI. They can:

  • Review it: Look at the code, verify it makes sense.
  • Request changes: "This doesn't handle SMS recovery codes. Please add that."
  • Approve it: "Looks good, continue."
  • Discard it: "Actually, let's try a different approach. Start over."

The agent might iterate. The developer asks for changes. Another Draft Commit is generated. This cycle repeats until the developer is satisfied.

Draft Commits live only in the session. They're not committed to git. They're not persisted to disk. If the session ends, they're gone. This is intentional—they're working snapshots, not permanent records.

Stage 3: Developer Reviews and Iterates

The developer is in control at this stage. They're the arbiter of quality.

Typical review flow:

Developer: "Add SMS as an MFA option"
Agent:     [generates code, creates Draft Commit #2]
Developer: [reviews] "Good, but error handling is incomplete. What if SMS
           delivery fails?"
Agent:     [analyzes objection, modifies code, creates Draft Commit #3]
Developer: [reviews] "Much better. Now let's add metrics tracking."
Agent:     [implements metrics, creates Draft Commit #4]
Developer: [reviews] "Perfect. This is ready."

Each Draft Commit includes:

  • The code changes
  • The reasoning (what was asked, what was considered, why it was done this way)
  • Metadata (which model, how long, how many tokens)

The developer can see all of this. They can ask questions that make sense only if they understand the reasoning:

  • "You chose the service pattern over a middleware-only approach. Was that decision driven by testability or maintainability?"
  • "I see you rejected message queuing. Was that because of complexity or because the throughput doesn't warrant it?"

These questions are possible because the reasoning is explicit, not buried in code.

Stage 4: Committing to Git

When the developer is satisfied with the work, they commit it to git using standard commands:

$ git add .
$ git commit -m "feat: add MFA with TOTP and SMS support

- Extract MFA logic into dedicated service
- Support TOTP (time-based one-time passwords)
- Support SMS delivery via Twilio
- Maintain backward compatibility with JWT-only flow
- Add rate limiting and account lockout after N failed attempts"

This is a normal git commit. The developer is doing exactly what they'd do without AI assistance. No new commands. No Bitloops-specific workflow. Just standard git.

At this point:

  • The code enters the permanent git repository
  • The commit gets a hash (e.g., abc123def456...)
  • The commit is on the branch (e.g., feature/auth-refresh)
  • The session is complete (though it could continue with new work)

Stage 5: Post-Commit Hook Triggers Automatically

Here's where Bitloops takes over automatically. A git post-commit hook fires (installed once when Bitloops is set up):

$ git commit -m "feat: add MFA with TOTP and SMS support"
[feature/auth-refresh abc123def456] feat: add MFA with TOTP and SMS support
 Author: Developer Name <dev@example.com>
 Date:   Wed Mar 5 14:32:18 2026 +0000

 (Bitloops post-commit hook) Capturing Committed Checkpoint...
 ✓ Retrieved Draft Commits from session
 ✓ Bundled activity chain, reasoning, metadata
 ✓ Bound to commit hash abc123def456
 ✓ Indexed in SQLite + HNSW vector database
 ✓ Checkpoint committed to local knowledge store

The hook does several things automatically:

  1. Retrieves the Draft Commits: Finds the Draft Commits from the session that correspond to this git commit.
  2. Constructs the Committed Checkpoint: Bundles the activity chain, reasoning, prompts, constraints, alternatives considered, symbols touched, model metadata, and more.
  3. Binds to Commit Hash: Cryptographically links the checkpoint to the git commit hash. This ensures immutability—the checkpoint can't be separated from the commit or altered without invalidating the hash.
  4. Stores in Knowledge Store: Saves the checkpoint to the local SQLite database and indexes it in the HNSW vector database.
  5. Validates: Ensures all required fields are present and the checkpoint is valid.

The entire process is transparent. The developer sees a brief confirmation message and continues working. No manual steps. No additional commands.
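A hedged sketch of what steps 1, 2, and 5 of such a hook might do internally, assuming drafts arrive as plain dictionaries (the field names and validation rules here are hypothetical, not Bitloops' actual implementation):

```python
def build_checkpoint(commit_hash: str, session_drafts: list) -> dict:
    """Bundle a session's Draft Commits and validate required fields."""
    required = ("title", "status", "reasoning")
    for draft in session_drafts:
        missing = [f for f in required if f not in draft]
        if missing:
            raise ValueError(f"draft missing fields: {missing}")
    return {
        "commit_hash": commit_hash,
        "activity_chain": session_drafts,   # Draft Commits, in order
        "draft_count": len(session_drafts),
    }

drafts = [
    {"title": "feat: add MFA service with TOTP support",
     "status": "approved",
     "reasoning": "service pattern to isolate MFA logic"},
    {"title": "feat: add SMS support",
     "status": "approved",
     "reasoning": "Twilio delivery, recovery codes"},
]
checkpoint = build_checkpoint("abc123def456", drafts)
```

Validation happens before anything is stored, so an incomplete capture fails loudly rather than producing a half-usable record.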

The Committed Checkpoint now exists and looks something like this:


Committed Checkpoint: chp_mfa_2026_03_05
Git Commit Hash: abc123def456...
Timestamp: 2026-03-05T14:32:18Z
Developer: dev@example.com
Branch: feature/auth-refresh
Activity Chain:
Draft Commit #1: "Add MFA service with TOTP"
└── Status: Approved
Draft Commit #2: "Add SMS support"
└── Status: Approved
Draft Commit #3: "Add error handling and recovery"
└── Status: Approved
Draft Commit #4: "Add metrics and monitoring"
└── Status: Approved (Final)
Reasoning Capture:
Original Prompt: "Refactor the authentication middleware to support..."
Problem Statement: "Add MFA while maintaining JWT compatibility"
Constraints Applied:
├── No schema changes
├── Backward compatible with existing JWT flow
├── Must complete auth in <500ms
└── Must support SMS and TOTP factors
Approaches Considered:
├── Approach A: Middleware-only state machine (rejected: coupling)
├── Approach B: Service + middleware (chosen: clean separation)
└── Approach C: External MFA service (rejected: overkill, cost)
Decision Points:
├── SMS Provider: Chose Twilio (tested, documented)
├── Recovery: Chose recovery codes (stateless, user-friendly)
└── Rate Limiting: Chose 5 attempts + 15min lockout
Confidence: High
Symbols Touched:
New: MFAService, TOTPValidator, SMSDelivery
Modified: AuthMiddleware, JWTValidator
Dependencies: 7 files, 23 locations
Impact Analysis: Auth flow, user registration, logout
Model Metadata:
Model: claude-opus-4-6
Total Tokens: 18500
Reasoning Tokens: 4200
Temperature: 0.3
Session Duration: 47 minutes
Code Changes:
Lines Added: 412
Lines Deleted: 23
Files Changed: 8
Tests Added: 34

This checkpoint is now immutable. It's cryptographically bound to the git commit hash: if someone modifies it later, the hash no longer validates. This matters for auditing and compliance, where a trustworthy audit trail is essential.
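One simple way such hash binding can work is a digest computed over the commit hash plus the canonicalized checkpoint body. This is an illustrative sketch, not Bitloops' actual scheme:

```python
import hashlib
import json

def verify_checkpoint(checkpoint: dict) -> bool:
    """Recompute the integrity digest; False means the record was altered."""
    body = {k: v for k, v in checkpoint.items() if k != "integrity"}
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256((body["commit_hash"] + canonical).encode()).hexdigest()
    return digest == checkpoint.get("integrity")

# Build a record whose digest covers its own contents.
record = {"commit_hash": "abc123def456", "reasoning": "service pattern"}
canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
record["integrity"] = hashlib.sha256(
    (record["commit_hash"] + canonical).encode()
).hexdigest()

assert verify_checkpoint(record)          # untouched record validates
record["reasoning"] = "edited after the fact"
assert not verify_checkpoint(record)      # any tampering invalidates it
```

Because the digest includes the commit hash itself, the checkpoint also cannot be quietly reattached to a different commit.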

Stage 6: Checkpoint Is Indexed

The Committed Checkpoint isn't just stored—it's indexed. Specifically:

  1. Structured Indexing: The checkpoint's metadata (model, timestamp, symbols, constraints) is indexed in SQLite. This enables fast lookups by date, model, file, or other structured criteria.
  2. Semantic Indexing: The checkpoint's reasoning text (prompts, constraints, decisions, explanations) is embedded as vectors and stored in the HNSW vector database. This enables natural language search.

Both indices are built automatically. The developer doesn't need to do anything.
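For the structured side, a toy version using Python's built-in sqlite3 module shows the kind of lookup this enables. The schema here is illustrative, not the actual Bitloops table layout:

```python
import sqlite3

# In-memory database standing in for the local knowledge store.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE checkpoints (
        commit_hash TEXT PRIMARY KEY,
        model       TEXT,
        created_at  TEXT,
        files       TEXT
    )
""")
db.execute(
    "INSERT INTO checkpoints VALUES (?, ?, ?, ?)",
    ("abc123def456", "claude-opus-4-6", "2026-03-05", "middleware/auth.ts"),
)

# Structured lookup: everything a given model produced since a date.
rows = db.execute(
    "SELECT commit_hash FROM checkpoints WHERE model = ? AND created_at >= ?",
    ("claude-opus-4-6", "2026-02-01"),
).fetchall()
```

The semantic side would additionally require an embedding model and an HNSW index, which is why it is handled by a dedicated vector store rather than SQL.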

Examples of what becomes queryable:

# Structured queries (SQLite)
$ bitloops query --model=claude-opus-4-6 --since=2026-02-01
(Shows all commits made with Claude Opus 4.6 since Feb 1)

$ bitloops query --file=middleware/auth.ts --last=10
(Shows the 10 most recent commits touching the auth middleware)

# Semantic queries (HNSW vector search)
$ bitloops query "How did we approach multi-factor authentication?"
(Returns commits related to MFA, ranked by relevance)

$ bitloops query "What were the trade-offs in caching decisions?"
(Returns commits that discuss caching trade-offs)

Both structured and semantic queries are fast. The developer (or AI agent) can get context instantly.

Stage 7: Future Access and Building on Prior Decisions

Now the checkpoint is part of the permanent history. Future work builds on it.

Scenario A: Onboarding a New Team Member

A junior engineer joins the team. They need to understand the authentication system. Instead of reading code and asking senior developers, they can:

$ bitloops query "Show me the decisions behind multi-factor authentication"

They get the full context: the problem statement, the constraints, the approaches considered, why MFA was chosen over alternatives, the trade-offs made. They inherit the institutional knowledge captured at the time the code was written.

Scenario B: Debugging an Issue

A production issue surfaces: SMS delivery is taking 800ms, exceeding the target of <500ms. A developer pulls up the reasoning:

Original Prompt: "...Target: under 500ms per auth request."
Constraints Applied: "Must complete auth in <500ms"
Decision Point: "SMS delivery: chose Twilio (tested, documented)"

Now the developer understands: The 500ms constraint was explicit. The choice of Twilio was deliberate. The issue is real. The developer might:

  • Optimize the Twilio integration (caching tokens, batching, etc.)
  • Add async SMS delivery (send code, validate in background)
  • Change the constraint (if business needs allow)

But they're making informed decisions because the original reasoning is explicit.

Scenario C: Extending the Feature

A new task arrives: "Add email as an MFA option alongside TOTP and SMS."

An AI agent working on this task can query prior reasoning:

Prior Decision: "MFA implemented via service pattern for clean separation"
Constraints: "No schema changes, must stay <500ms"
Alternatives: "Considered external MFA service, rejected as overkill"

The agent doesn't start from scratch. It understands the architectural approach, the constraints, the reasoning. It can generate code that's consistent with prior decisions. This builds institutional knowledge over time.

Stage 8: Using History to Guide Future Work

As checkpoints accumulate, patterns emerge. Over a year, a team might have 200+ commits, each with reasoning captures. This creates a learnable dataset.

Pattern Analysis

The team can ask: "What architectural patterns do we use most often?"

The data might show:

  • Service pattern: 45 commits
  • Repository pattern: 30 commits
  • Middleware pattern: 25 commits
  • Direct ORM usage: 8 commits

This tells them: "Service pattern is our default. Repository pattern is for complex queries. We avoid direct ORM in new code." This is implicit knowledge made explicit.
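Counting a pattern field across reasoning captures is all such an analysis needs. A sketch, assuming each checkpoint carries a hypothetical "pattern" tag:

```python
from collections import Counter

# Illustrative reasoning captures, each tagged with the pattern it used.
checkpoints = [
    {"pattern": "service"}, {"pattern": "service"},
    {"pattern": "repository"}, {"pattern": "middleware"},
    {"pattern": "service"},
]

# Tally which architectural patterns the team reaches for most often.
usage = Counter(cp["pattern"] for cp in checkpoints)
```

`usage.most_common()` then ranks the team's de facto defaults without anyone having to write them down.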

Constraint Evolution

The team can ask: "How have performance constraints evolved?"

The data might show:

  • 2025: "Sub-second is acceptable"
  • 2026 Q1: "Sub-200ms required"
  • 2026 Q2: "Sub-100ms required for mobile"

This reveals: Performance is becoming more critical. Future AI agents should prioritize performance, not just correctness.

Decision Confidence

The team can ask: "Which decisions had low confidence?"

The data might show:

  • "Caching strategy marked as low confidence" (3 commits)
  • "Database indexing strategy marked as medium confidence" (5 commits)
  • "API design marked as high confidence" (12 commits)

This tells them: Watch the caching and indexing strategies. They might need revisiting. The API design is solid.

What's Automatic vs. What Requires Human Action

It's important to be clear about what happens automatically and what requires the developer:

| Stage | Automatic? | Who's Involved | What Happens |
| --- | --- | --- | --- |
| Session Start | Partially | Developer | Developer initiates; system loads context |
| AI Generation | Automatic | AI Agent | Agent generates code, creates Draft Commits |
| Review & Iteration | Manual | Developer | Developer reviews, requests changes |
| Git Commit | Manual | Developer | Developer runs git commit (normal command) |
| Post-Commit Hook | Automatic | System | Bitloops captures checkpoint automatically |
| Indexing | Automatic | System | Checkpoint is indexed in SQLite + HNSW |
| Future Queries | Manual | Developer/AI | Someone queries the checkpoint for context |

The developer's only new responsibility is the review step, and that's actually a responsibility they should have anyway (don't commit code you haven't reviewed, even if it's AI-generated). Everything else is automatic.

Concrete End-to-End Example

Let's walk through a complete example from start to finish.

Day 1, 10:00 AM: Session Starts

Developer Mary starts her workday. She opens her IDE and starts a Bitloops session.

$ bitloops session start
Session ID: sess_mary_001
Current branch: main
Context loaded from 8 recent commits
Ready for AI coding.

Day 1, 10:05 AM: Task Begins

Mary asks the AI agent:

"I need to add request logging to the API. Log all requests and responses, including headers, body (sanitized), and response time. Don't log sensitive data like passwords or tokens. Target: <5ms overhead per request."

Day 1, 10:15 AM: First Draft Commit

The agent has analyzed the codebase and generated code. It creates a Draft Commit:

  • Added RequestLogger middleware
  • Added SanitizationRules for sensitive fields
  • Added tests

Mary reviews it:


Draft Commit #1: "feat: add request logging middleware"
Code looks good
Tests are comprehensive
Reasoning included: "Middleware pattern chosen for
clean separation from business logic"

Mary approves.
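A sanitization rule like the one in Draft Commit #1 could be as simple as key-based redaction. The key list and function name here are hypothetical, not the actual SanitizationRules implementation:

```python
SENSITIVE_KEYS = {"password", "token", "api_key", "secret"}

def sanitize(payload: dict) -> dict:
    """Replace sensitive values before the request body is logged."""
    return {
        key: "[REDACTED]" if key.lower() in SENSITIVE_KEYS else value
        for key, value in payload.items()
    }

body = {"email": "mary@example.com", "password": "hunter2", "token": "eyJhbGci"}
clean = sanitize(body)
# → {'email': 'mary@example.com', 'password': '[REDACTED]', 'token': '[REDACTED]'}
```

A real rule set would also need to handle nested bodies and headers, but the principle, redact by key before the log line is written, is the same.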

Day 1, 10:25 AM: Iteration

Mary realizes: "We also need to log database queries, not just HTTP requests."

She asks the agent:

"Add database query logging. Log the query, bind parameters (sanitized), and execution time. Integrate it with the request logger so queries are tied to parent requests."

Day 1, 10:45 AM: Second Draft Commit

The agent generates more code. New Draft Commit:

  • Added QueryLogger service
  • Modified database client to integrate with logging
  • Added configuration for which queries to log
  • Updated tests

Mary reviews:


Draft Commit #2: "feat: add database query logging"
Good integration with request logger
Thoughtful filtering of queries (avoiding logs for every SELECT)
Reasoning shows the agent considered: "Query logging overhead.
Chose sampling for high-traffic queries."

Mary approves.

Day 1, 11:00 AM: Review is Complete

Mary decides the work is ready. She commits to git:

$ git add .
$ git commit -m "feat: add comprehensive request and query logging

- Add request logging middleware for HTTP traffic
- Add database query logger with sampling for high-traffic queries
- Sanitize sensitive fields (passwords, tokens, API keys)
- Log response times and execution times
- Integrate query logs with request ID for tracing
- Target: <5ms overhead per request"

[main abc789def012] feat: add comprehensive request and query logging
 Author: Mary <mary@example.com>
 Date:   Wed Mar 5 11:00:00 2026 +0000

 (Bitloops post-commit hook) Capturing Committed Checkpoint...
 ✓ Bundled 2 Draft Commits into activity chain
 ✓ Captured reasoning, constraints, alternatives
 ✓ Bound to commit hash abc789def012
 ✓ Indexed in knowledge store
 ✓ Committed Checkpoint ready for future access

Day 15, 2:00 PM: Bug Found

A developer (James) finds an issue: query logging is too aggressive and slows down the transaction endpoint.

He queries the checkpoint:


$ bitloops query "Show me the reasoning behind query logging decisions"
Result: Committed Checkpoint from abc789def012
Reasoning:
Problem: Need logging without excessive overhead
Constraint: <5ms overhead per request
Approaches Considered:
Log all queries (rejected: too slow)
Log sampled queries (chosen: balanced)
Asynchronous logging (rejected: complexity)
Decision Point: "Chose sampling for high-traffic queries"
Alternative Rejected: "Asynchronous logging rejected because:
adds complexity and doesn't guarantee ordering"

James understands: The sampling approach was deliberate, chosen over async logging. But the current configuration might be logging too much. He checks:

$ grep -r "LOG_SAMPLE_RATE" config/
LOG_SAMPLE_RATE=0.1  # Log 10% of queries

He realizes: Even 10% sampling on a high-traffic endpoint is too much. He lowers it to 1%:

LOG_SAMPLE_RATE=0.01  # Log 1% of queries

The issue is resolved. Without the checkpoint, James would have had to reverse-engineer why sampling was chosen and whether it was the right decision. With the checkpoint, he understood the trade-offs and could make an informed adjustment.
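The sampling decision James tuned can be sketched as a per-query coin flip. The function name and rate handling are illustrative assumptions, not the actual QueryLogger code:

```python
import random

def should_log(sample_rate: float, rng: random.Random) -> bool:
    """Probabilistic sampling: log a given query with probability sample_rate."""
    return rng.random() < sample_rate

rng = random.Random(42)   # seeded so the sketch is reproducible
logged = sum(should_log(0.01, rng) for _ in range(10_000))
# At LOG_SAMPLE_RATE=0.01, roughly 1% of 10,000 queries get logged.
```

Lowering the rate from 0.1 to 0.01, as James did, cuts logging overhead by about 10x on a hot endpoint while keeping a statistically useful sample.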

Integration With Existing Workflows

A key point: Committed Checkpoints and Bitloops integration don't disrupt existing workflows. There are no new commands for the developer.

Before Bitloops:

$ git add .
$ git commit -m "Add logging"
$ git push

With Bitloops:

$ git add .
$ git commit -m "Add logging"
(Bitloops post-commit hook runs automatically)
$ git push

The only difference is the post-commit hook. Everything else is identical. Developers don't need training. The workflow is immediately familiar.

In the Background:

While the developer continues working, Bitloops is:

  • Capturing the checkpoint
  • Indexing it in SQLite and HNSW
  • Making it queryable for future developers and AI agents

All invisible. All automatic.

An AI-Native Perspective

From an AI agent's perspective, this workflow creates a knowledge base. An agent working on a new task can:

  1. Read the current code (like any code agent does)
  2. Query the checkpoint history for prior reasoning
  3. Understand not just what was built, but why
  4. Generate code consistent with prior decisions

Over time, as checkpoints accumulate, agents get smarter within a codebase. They stop reinventing approaches. They build on institutional knowledge. This is possible because the reasoning—the why—is preserved alongside the code. Tools like Bitloops make this automatic, not an afterthought.

FAQ

What happens to Draft Commits if the developer doesn't commit?

They're discarded. Draft Commits live in the session. If the developer closes the session without committing, the Draft Commits are lost. This is intentional—they're working snapshots, not intended to be permanent.

Can I recover a session if my computer crashes?

Depends on implementation. If Bitloops persists session state (Draft Commits) to disk, you might be able to recover. But the guarantee isn't strong. This is why committing to git regularly is important—it's the permanent record.

Can I reorder Draft Commits if they're out of order?

The Committed Checkpoint captures the order in which Draft Commits were created. Reordering would break causality (Commit 2 might have depended on decisions in Commit 1). Best practice: let the agent work sequentially and commit once satisfied. Reordering is a sign of conceptual confusion about the work.

What if I merge multiple AI tasks into one git commit?

The Committed Checkpoint can handle this. It bundles multiple Draft Commits into a single activity chain. The reasoning preserves the sequence: "First, the agent did X. Then, the agent did Y. Finally, the agent did Z." The final git commit is still one commit with one hash, but the reasoning captures the multi-step journey.

Can other team members see Committed Checkpoints?

By default, checkpoints are in your local knowledge store. If you want to share them (for code review, onboarding, auditing), you'd export them or replicate them. This is a permissions design choice—treat checkpoints like source code.

How do I delete a Committed Checkpoint?

You can't. That's the point of immutability. If a checkpoint is wrong or contains sensitive information, you can:

  1. Create a new commit that corrects/redacts the information.
  2. That new commit gets a new checkpoint.
  3. The old checkpoint remains as historical record (it's still immutable, but superseded).

This is like git history—you can't rewrite the past, but you can move forward from it.

What if I use multiple AI services (Claude, GPT, Gemini)?

Each Committed Checkpoint includes the model metadata. The checkpoint for Claude-generated code is tied to Claude; the checkpoint for GPT-generated code is tied to GPT. You have a multi-model history. This is useful because it lets you analyze questions like: "Do certain models exhibit different decision patterns?"

How large can a Committed Checkpoint be?

The checkpoint is metadata-heavy, not code-heavy. The code is stored in git. The checkpoint stores reasoning, prompts, constraints, decision points—typically a few KB to a few hundred KB per commit. Even a repository with 500 commits would hold well under 100 MB of checkpoints at that rate, which is very manageable.

Primary Sources

  • Pro Git Book: guide to Git workflows, hooks, and integration points for automating commit recording.
  • Software Engineering: A Practitioner's Approach: comprehensive reference on software engineering practices and development workflows.
  • HNSW: hierarchical algorithm for efficient semantic search over embedded commit history.
  • FAISS: large-scale similarity search library for indexing session and commit embeddings.
  • SQLite: lightweight database for storing session transcripts and commit metadata persistently.
  • Qdrant: vector database for querying and retrieving similar AI decisions from history.

Get Started with Bitloops.

Apply what you learn in these hubs to real AI-assisted delivery workflows with shared context, traceable reasoning, and architecture-aware engineering practices.

curl -sSL https://bitloops.com/install.sh | bash