The open-source AI coding terminal deserves open-source architectural intelligence.
OpenCode gives you full control over your AI workflow — but manually maintaining context files for each provider is a tax that grows with your team. Bitloops automates context engineering for OpenCode so every session starts informed, regardless of which model you're using.
curl -sSL https://bitloops.com/install.sh | bash

What is OpenCode?
OpenCode is an open-source, terminal-based AI coding assistant that brings the power of multiple LLM providers into a single, extensible CLI experience. Built by the community and fully transparent, it supports OpenAI, Anthropic, Google, and other providers — letting developers choose the model that best fits each task without switching tools. OpenCode focuses on developer control, customisability, and transparency: you choose the model, the workflow, and the rules. Its extensible architecture supports custom configurations and plugins. Developers use opencode.md files and @File references to provide context, but maintaining that context across providers and sessions is the problem Bitloops solves.
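As an illustration, a minimal opencode.md might look like the sketch below. Every name in it (the stack, the paths, the conventions) is hypothetical; the point is that this file has to be written and kept current by hand:

```markdown
# Project context for OpenCode (illustrative example)

## Stack
- TypeScript monorepo, pnpm workspaces
- REST API in packages/api, React front end in packages/web

## Conventions
- All database access goes through the repository layer in packages/api
- Validate inputs at API boundaries before they reach business logic
```

Every architectural change to the project means another manual edit to this file — and to its AGENTS.md counterpart for other tools.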
Fully open source and transparent
Open-source codebase with an active community — inspect, modify, and extend every aspect of the tool. No proprietary black boxes in your development workflow.
Multi-provider model support
Works with OpenAI, Anthropic, Google, and other LLM providers out of the box — switch models for different tasks without switching tools or losing your workflow.
Terminal-native developer experience
Runs entirely in your terminal — lightweight, fast, and integrated with your existing command-line workflow. No IDE dependency required.
Extensible plugin architecture
Customisable configuration and plugin system let you tailor OpenCode to your exact development workflow, team conventions, and integration requirements.
You're already doing this. Just manually.
OpenCode's flexibility is its strength — but it means context management falls entirely on you. Teams maintain an opencode.md for session-level context and an AGENTS.md so that context works when colleagues use Claude Code, Codex, or other tools. Switching AI providers mid-project makes it worse: context written for one model doesn't always translate to another. Manual context engineering breaks down at scale.
What you're maintaining today

opencode.md: Project context and system prompt configuration maintained manually for OpenCode sessions — needs updating as your project and architecture evolve.

AGENTS.md: A shared context file readable by OpenCode and other AI agents — essential when your team uses multiple tools or switches between providers.

@File#L37-42: Manually referencing files and specific line ranges in each session using OpenCode's @ syntax (Cmd+Option+K to insert) — repeated for every relevant file, every session.

The problems with this approach

Context doesn't follow model switches
Switching between AI providers mid-project means your carefully written context may not land the same way. You end up re-tuning context for each model.

Rules without reasoning
Your context files tell the model what to do, not why your codebase is structured the way it is — limiting how well any model can adapt to novel situations.

Manual maintenance doesn't scale
As your project grows and your team expands, keeping context files current becomes a part-time job. Files drift, get forgotten, or contradict each other across developers.
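In practice, each session starts by hand-feeding that context back in. A typical OpenCode prompt using the @ syntax might look like the sketch below (the file path and line range are hypothetical):

```text
Refactor the retry logic in @src/http/client.ts#L37-42 to use
exponential backoff, following the conventions in @AGENTS.md
```

Repeat that for every relevant file, in every session, on every model — that is the tax Bitloops removes.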
Why OpenCode users need Bitloops
OpenCode gives you freedom to choose your AI provider, but each session starts without architectural context — and switching providers makes the problem worse. Bitloops adds a persistent, model-agnostic context engineering layer that works across providers, sessions, and team members.
Replaces your opencode.md and AGENTS.md
Stop manually maintaining context files for each provider and session. Bitloops builds a persistent, model-agnostic context layer that works regardless of which LLM provider OpenCode is using.
Persistent memory across sessions and providers
Every conversation and decision is captured and linked to git commits — context carries forward even when sessions end or you switch between AI models.
Architecture-aware code generation
Bitloops feeds your project's software architecture patterns and constraints into OpenCode, so generated code respects your design decisions regardless of the underlying model.
Open source meets open source
Both Bitloops and OpenCode are fully open source — complete transparency, no vendor lock-in, and full developer control over your AI-assisted development stack.
Set up in 60 seconds
Install the Bitloops CLI
One command to install Bitloops on macOS, Linux, or Windows. Works with Homebrew, curl, and Cargo.
curl -sSL https://bitloops.com/install.sh | bash

Initialize your repository
Run bitloops init in your project to set up the context engineering layer. Bitloops detects your project structure and AI tools automatically.
bitloops init

Use OpenCode as usual
Bitloops runs locally in the background — capturing reasoning, linking decisions to git commits, and building your project's semantic context graph. Your OpenCode workflow stays unchanged.
Everything you get with Bitloops + OpenCode
Automatic decision capture
Every OpenCode conversation — regardless of which LLM provider is active — is recorded and linked to the resulting code changes, building a provider-agnostic development history.
Provider-agnostic context injection
Bitloops feeds architectural context into every OpenCode session regardless of the underlying model — consistent context engineering whether you're using OpenAI, Anthropic, or Google.
Semantic codebase model
Builds a structured graph of your codebase — modules, APIs, architectural boundaries — that enriches any AI provider's understanding of your project.
Commit-level AI attribution
Every git commit knows which OpenCode conversation and which model produced it. Reviewers see the full reasoning chain across providers.
Architectural constraint enforcement
Define your project's architectural rules and design patterns once. Bitloops enforces them across all AI-generated code — regardless of which model is active in OpenCode.
Fully open-source stack
Both Bitloops and OpenCode are open source. Your entire AI-assisted development stack is transparent, auditable, and free from vendor lock-in.
Also works with
Bitloops integrates with all major AI coding agents.
Get Started with Bitloops
Bring shared context, traceable reasoning, and architecture-aware engineering practices to your real AI-assisted delivery workflows.
curl -sSL https://bitloops.com/install.sh | bash