If you’ve been running an AI agent with OpenClaw, you’ve probably hit the same wall I did — your agent forgets things. You have a great conversation on Tuesday, set up workflows, share context, and by Thursday it’s like talking to a stranger. Memory embeddings and vector indexing help, but they’re kind of a black box. You can’t actually see what your agent retained or how it’s connecting ideas.
In this video, Ron from the Boxmining team walks through how he solved this problem using Obsidian and GitHub — and honestly, it’s one of the most practical approaches I’ve seen for giving your AI agent a real, persistent memory.
The Memory Problem With AI Agents
Here’s the thing about memory embeddings: they work, but they’re opaque. Your agent converts conversations into vectors — essentially long lists of numbers — and retrieves relevant chunks when needed. It’s functional, but you never really know what got stored, what got lost, or how your agent is connecting different pieces of information.
Ron’s approach is different. Instead of relying solely on embeddings, he has his OpenClaw agent write daily summaries as plain text markdown files. Everything discussed in a 24-hour period gets captured in a structured, human-readable format. You can actually open the file and verify that your agent understood and remembered what you talked about. It’s like a shared journal between you and your AI.
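To make the daily write-out concrete, here's a minimal sketch of what that step could look like. The vault path, filename scheme, and summary text are my own stand-ins, not Ron's exact setup:

```python
from datetime import date
from pathlib import Path

def write_daily_summary(vault, summary, day=None):
    """Append a human-readable summary to a dated markdown note in the vault."""
    day = day or date.today()
    note = Path(vault) / f"{day.isoformat()}.md"
    # Start the note with a heading the first time it's written on a given day.
    header = "" if note.exists() else f"# Daily summary, {day.isoformat()}\n\n"
    with note.open("a", encoding="utf-8") as f:
        f.write(header + summary.rstrip() + "\n")
    return note

vault = Path("vault")  # hypothetical vault location; use your real Obsidian vault path
vault.mkdir(exist_ok=True)
path = write_daily_summary(vault, "- Discussed the Obsidian + GitHub memory setup")
```

Because the output is plain markdown, you can open the dated file in Obsidian and read exactly what the agent retained.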
Why Obsidian Changes Everything
Obsidian is a markdown-based note-taking app that’s become incredibly popular for building what people call a “second brain.” What makes it special for AI agents isn’t just the note storage — it’s the knowledge graph. Every note can link to other notes, and over time these connections form a web of related ideas that your agent can traverse.
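Those note-to-note connections are just `[[wikilink]]` syntax inside the markdown files, which means an agent can rebuild the whole graph with a regex. A minimal sketch, with made-up note names and contents:

```python
import re
from collections import defaultdict

# Captures the target of [[Target]], [[Target|alias]], or [[Target#heading]].
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def build_graph(notes):
    """Map each note name to the set of notes it links to."""
    graph = defaultdict(set)
    for name, body in notes.items():
        for target in WIKILINK.findall(body):
            graph[name].add(target.strip())
    return dict(graph)

notes = {
    "2026-02-03": "Talked about [[Obsidian]] and [[GitHub]] as agent memory.",
    "Obsidian": "Pairs well with [[Smart Connections]] for semantic search.",
}
graph = build_graph(notes)
```

Traversing that adjacency map is how a link-aware agent can hop from a daily summary to every related topic note.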
According to a recent guide from NxCode, Obsidian’s local-first architecture makes it ideal for AI-powered knowledge systems because everything stays on your machine — no cloud dependency, no privacy concerns. When you pair it with an AI agent that writes to the vault daily, you’re essentially building a growing knowledge base that gets smarter over time.
Ron recommends two key Obsidian community plugins to supercharge this setup:
Smart Connections by Brian Petro — with over 836,000 downloads, this plugin adds semantic vector search and local model embeddings directly inside Obsidian. It lets your agent (and you) find related notes based on meaning, not just keywords. No API keys required, and it’s completely private. Think of it as giving your agent the ability to “connect the dots” across all your notes automatically.
QMD as MD — this plugin ensures your agent writes notes in a consistent, structured format that Obsidian can properly index and link. It’s a small thing, but format consistency is huge when you’re building a knowledge base that needs to scale.
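To make "consistent, structured format" concrete, here's one way an agent could template its notes with YAML front matter so Obsidian indexes them uniformly. The fields shown are my own illustration, not a schema the plugin requires:

```python
def render_note(title, date_str, tags, body):
    """Render a note as YAML front matter followed by a markdown body."""
    front_matter = "\n".join([
        "---",
        f"title: {title}",
        f"date: {date_str}",
        f"tags: [{', '.join(tags)}]",
        "---",
        "",  # blank line between front matter and body
    ])
    return front_matter + f"# {title}\n\n{body}\n"

note = render_note("Daily Briefing", "2026-02-03", ["news", "crypto"], "Top stories of the day.")
```

Every note getting the same front matter is what lets queries, tags, and plugins work predictably as the vault grows.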
The GitHub Backup Layer
The Obsidian vault on its own is great, but Ron adds GitHub as a backup and sync layer. Every note gets version-controlled, so you have a complete history of what your agent has learned and when. If something goes wrong, you can roll back. If you want to see how your agent’s understanding evolved over weeks, you can diff the files.
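The backup step itself is plain git. Here's a sketch of how an agent could commit the vault after each write, run in a throwaway directory with a placeholder identity since your real vault path and git config will differ:

```python
import pathlib
import subprocess
import tempfile

def backup_vault(vault, message):
    """Stage and commit every change in the vault (repo must already be initialized)."""
    subprocess.run(["git", "-C", str(vault), "add", "-A"], check=True)
    subprocess.run(
        ["git", "-C", str(vault),
         "-c", "user.name=agent", "-c", "user.email=agent@example.com",  # placeholder identity
         "commit", "-q", "-m", message],
        check=True,
    )

# Demo in a temporary directory standing in for the real vault.
vault = pathlib.Path(tempfile.mkdtemp())
subprocess.run(["git", "-C", str(vault), "init", "-q"], check=True)
(vault / "2026-02-03.md").write_text("# Daily summary\n")
backup_vault(vault, "vault backup: 2026-02-03")
```

With commits like this in place, `git log` gives you the timeline of what the agent learned and `git diff` shows how any note changed between days.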
This combo — Obsidian for the knowledge graph, GitHub for version control — creates a persistent memory system that survives context window resets, session restarts, and even agent migrations. Several experienced users in the community have reported that by week four or five of using this setup, they see a noticeable improvement in their agent’s output quality. The agent starts producing responses that match their personal style and makes connections between topics that would have been impossible with embeddings alone.
Discord Architecture Over Chat Apps
One insight from the video that really stood out: Ron explains why Discord is superior to WhatsApp or Telegram for AI agent interactions. The channel-based architecture lets you keep topics separated. You can have a dedicated channel for crypto research, another for trading journal entries, another for daily news briefings — each with its own focused context.
With a single chat thread on WhatsApp, all your topics get mixed together, which muddies the context window and degrades output quality. Discord’s structure naturally organizes information the way your agent needs it.
Practical Workflows: News Briefings and Research
Ron shares two workflows he’s built on top of this system. The first is a daily morning news briefing — a cron job that runs at 7 AM Hong Kong time, using the Brave Search API to pull from crypto-specific sites, X (Twitter), FinViz for market data, and TechCrunch for AI news. The agent compiles the top 10 most relevant items into a structured summary in the Obsidian vault, designed to be read in under two minutes.
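Setting the fetching aside (the Brave Search side needs an API key and network access), the compile step is easy to sketch: rank whatever items came back and keep the top 10 as a skimmable markdown list. The item fields and scoring below are my own stand-ins:

```python
def compile_briefing(items, limit=10):
    """Turn scored news items into a short markdown briefing, most relevant first."""
    top = sorted(items, key=lambda it: it["score"], reverse=True)[:limit]
    lines = ["# Morning Briefing", ""]
    for i, it in enumerate(top, 1):
        lines.append(f"{i}. [{it['title']}]({it['url']}) ({it['source']})")
    return "\n".join(lines) + "\n"

items = [
    {"title": "ETF flows spike", "url": "https://example.com/a", "source": "example.com", "score": 0.9},
    {"title": "New model release", "url": "https://example.com/b", "source": "example.com", "score": 0.7},
]
briefing = compile_briefing(items)
```

The resulting markdown drops straight into the vault like any other note, so the briefing itself becomes part of the knowledge graph.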
If any item warrants deeper investigation, that’s where the second workflow kicks in: deep research using parallel sub-agents. OpenClaw can spawn multiple sub-agents simultaneously, each researching a different angle of a story. The results get compiled and stored back in Obsidian, building up the knowledge graph further.
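The fan-out pattern itself is simple to sketch in plain Python. Here `research` is a stand-in for an actual sub-agent call, not OpenClaw's API:

```python
from concurrent.futures import ThreadPoolExecutor

def research(angle):
    """Stand-in for a sub-agent researching one angle of a story."""
    return f"findings on {angle}"

# Each angle gets its own worker, running in parallel.
angles = ["on-chain data", "team background", "market reaction"]
with ThreadPoolExecutor(max_workers=len(angles)) as pool:
    results = list(pool.map(research, angles))

# Compile the parallel results back into a single note body.
report = "\n".join(f"- {r}" for r in results)
```

The compiled report is then written to the vault, which is what keeps each deep dive feeding the graph rather than vanishing with the session.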
Beyond those two workflows, the use case Ron is working toward next is a trading research assistant. Not an automated trader — he’s clear about that — but an agent that can analyze his trading journal, identify patterns in winning versus losing trades, and surface recurring variables he might miss. The key insight here is that you need to feed it real data from actual trades, not ask it to generate a profitable strategy from nothing.
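On real journal data, even a crude pass can surface the kind of recurring variables Ron means. A toy sketch, with invented fields and trades:

```python
from collections import defaultdict

def win_rate_by(trades, key):
    """Win rate for each value of one journal variable, e.g. setup or session."""
    tally = defaultdict(lambda: [0, 0])  # value -> [wins, total]
    for t in trades:
        bucket = tally[t[key]]
        bucket[0] += t["pnl"] > 0
        bucket[1] += 1
    return {k: wins / total for k, (wins, total) in tally.items()}

trades = [
    {"setup": "breakout", "session": "asia", "pnl": 120},
    {"setup": "breakout", "session": "us", "pnl": -40},
    {"setup": "mean-reversion", "session": "asia", "pnl": 60},
]
rates = win_rate_by(trades, "setup")
```

Grouping by `"session"` instead of `"setup"` is the same one-line change, which is exactly the sort of cross-cut an agent can run over a whole journal faster than you can eyeball it.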
The Takeaway
The Obsidian plus GitHub approach isn’t just a memory hack — it’s a fundamentally different way of thinking about AI agent persistence. Instead of hoping your embeddings capture the right context, you’re building a structured, searchable, version-controlled knowledge base that grows with every conversation.
If you’re a beginner with OpenClaw (or any AI agent framework), this is probably the single highest-impact improvement you can make. Start saving daily conversation summaries to Obsidian, back them up to GitHub, install Smart Connections, and give it a few weeks. The difference compounds over time.
Ron is documenting his entire journey on the BoxminingAI YouTube channel, so subscribe if you want to follow along as he refines this system week by week.