Tag: obsidian

  • OpenClaw Memory Problem SOLVED: Stop Wasting Time Explaining

    If you’ve been using OpenClaw for more than a day, you’ve probably hit the same wall we did: you spend hours telling your AI agent everything about yourself, your preferences, your workflow — and then the next morning, it wakes up and has no idea who you are. In this video, Ron and I break down exactly how to fix this memory problem so you can stop wasting time re-explaining yourself every single session.

    The Problem: Your AI Agent Has Amnesia

    Here’s the thing most people don’t realize about AI agents — they’re not continuously running with perfect recall. The best way to think about it is that every morning, your AI wakes up fresh. The session resets, and it essentially forgets everything from the day before. It then has to read through its notes and files to piece together who it is and what you’ve been working on.

    This is a fundamental limitation of how large language models work. They operate within a context window — a fixed amount of text they can process at once. The larger that context window, the more expensive each interaction becomes. So you can’t just dump your entire conversation history into the prompt every time. Back in January, a lot of tutorials were telling people to just blast the AI with their life story so it would understand them better. Ron tried exactly that, spent two hours pouring everything in, and the next day? Gone. All of it.

    Solution #1: Semantic Search with Embeddings

    The first and most important thing you should enable on your OpenClaw agent is semantic memory search. This is the game-changer that turns your forgetful AI into something that can actually recall past conversations when it needs to.

    Here’s how it works: OpenClaw takes your entire conversation history and converts it into embeddings — essentially numerical representations of the meaning behind your words. These embeddings get stored in a vector database (OpenClaw uses SQLite with the sqlite-vec extension). When your agent needs to remember something, it performs a semantic search across all those stored embeddings to find the most relevant past conversations.

    The key insight is that your agent doesn’t load everything into memory all the time. It only searches when it needs to — when you ask it something that requires past context. This keeps costs manageable while still giving you access to months of conversation history. OpenClaw actually uses a hybrid approach: about 70% vector (semantic) search combined with 30% BM25 keyword search. The vector search handles conceptual matches where wording differs, while BM25 catches exact terms like error codes or function names.
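    The hybrid scoring described above can be sketched in plain Python. This is a toy illustration, not OpenClaw's actual implementation: the "embeddings" are hand-made vectors, the tokenization is naive, and the 70/30 weights are exposed as parameters so you can see exactly how the two signals blend.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two vectors (semantic closeness)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Classic BM25 keyword scores over pre-tokenized documents."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

def hybrid_rank(query_vec, query_terms, doc_vecs, doc_tokens,
                w_vec=0.7, w_bm25=0.3):
    """Blend semantic and keyword scores, roughly 70/30 as described above."""
    vec = [cosine(query_vec, dv) for dv in doc_vecs]
    kw = bm25_scores(query_terms, doc_tokens)

    def norm(xs):  # scale each signal to [0, 1] before blending
        hi = max(xs) or 1.0
        return [x / hi for x in xs]

    vec, kw = norm(vec), norm(kw)
    blended = [w_vec * v + w_bm25 * k for v, k in zip(vec, kw)]
    return sorted(range(len(blended)), key=lambda i: -blended[i])
```

    The point of the blend is visible even in a toy example: a query containing an exact error code will rank a document that mentions that code highly through BM25, even if the surrounding wording (and thus the vector score) is only a weak match.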

    OpenClaw has supported OpenAI’s embedding models by default, but the newest updates add Mistral as an embedding provider. This is a big deal because Mistral’s embeddings are cheaper to run over time. Research shows that Mistral’s embedding model achieved the highest accuracy at 77.8% in retrieval benchmarks while being more cost-effective than OpenAI’s offerings. If you’re running your agent daily and building up months of conversation data, those cost savings add up fast.

    Solution #2: QMD (Quantized Memory Documents)

    The second approach we’ve been testing is something called QMD — Quantized Memory Documents. This is another way to handle memory that can be cheaper than pure embedding-based search.

    Think of QMD as a more compressed, efficient way to store and retrieve memories. Instead of embedding every single line of conversation, QMD creates condensed document summaries that capture the essential information. It’s still in the experimental phase for many users, but it’s showing promise as a cost-effective alternative, especially for agents that accumulate massive amounts of conversation data over time.
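    The QMD format itself isn't documented in detail here, but the general idea of condensing a session rather than embedding every line can be sketched as follows. Everything in this snippet is illustrative: the `MemoryDoc` structure and the keyword heuristic for "durable facts" are made up for the example, not OpenClaw's actual logic.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryDoc:
    """A condensed stand-in for one session's memory (illustrative only)."""
    session_id: str
    facts: list = field(default_factory=list)

def condense(session_id, turns,
             keep_markers=("decided", "prefers", "always", "never")):
    """Keep only turns that look like durable facts, dropping chit-chat.

    `turns` is a list of (speaker, text) pairs. The marker words are a
    made-up heuristic; a real summarizer would use the model itself.
    """
    doc = MemoryDoc(session_id)
    for speaker, text in turns:
        if any(m in text.lower() for m in keep_markers):
            doc.facts.append(f"{speaker}: {text}")
    return doc
```

    The payoff is in the numbers: a two-hour session might be thousands of lines, but the facts worth carrying forward usually fit in a few dozen, which is far cheaper to store and re-read than the full transcript.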

    Solution #3: Skills — Teaching Your Agent Permanent Abilities

    The third approach is one of my favorites: skills. If there’s something your agent needs to do repeatedly — every single day — it should have a dedicated skill for it. I like to think of it like Neo learning kung fu in The Matrix. You just plug in the skill, and boom, your agent knows how to do it.

    A skill in OpenClaw is essentially a plain-text instruction file that lives in your agent’s skill directory. You can tell your agent to write a skill, refine it over time, and even connect to your server via SSH (using something like Termius) to manually inspect and edit the skill files. Since they’re written in plain English, you can understand exactly what your agent is doing and tweak it as needed.

    Skills are different from memory in an important way: memory is about recalling past conversations and context, while skills are about permanent capabilities. Your agent will always load its relevant skills at the start of a session, so it never “forgets” how to do something you’ve taught it.
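    Loading skills at session start can be sketched in a few lines. The directory layout and the `.md` extension are assumptions for illustration; the mechanics are just "read every plain-text file and prepend it to the agent's instructions."

```python
from pathlib import Path

def load_skills(skill_dir):
    """Read every plain-text skill file and return {name: instructions}.

    Assumes one skill per Markdown file in a flat directory; sorting
    keeps the load order deterministic between sessions.
    """
    skills = {}
    for path in sorted(Path(skill_dir).glob("*.md")):
        skills[path.stem] = path.read_text(encoding="utf-8")
    return skills

def build_system_prompt(base_prompt, skills):
    """Prepend each skill so the agent 'knows' it from the first turn."""
    sections = [base_prompt] + [
        f"## Skill: {name}\n{body}" for name, body in skills.items()
    ]
    return "\n\n".join(sections)
```

    Because the files are plain English, editing a skill over SSH is just editing text, and the change takes effect at the next session start.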

    Ron’s Approach: Obsidian Integration for Daily Summaries

    Ron and I actually have completely different approaches to memory, which I think highlights how personal this whole setup is. I tend to care less about memory and more about having a solid project plan — as long as the agent knows what we’re building, I’m good. Ron, on the other hand, is meticulous about how his agent remembers things.

    His approach goes beyond just embeddings and semantic search. He’s set up his OpenClaw agent to save a full summary of every chat session as a plain text file and push it to Obsidian — the popular note-taking app. This creates a human-readable archive of everything the agent has discussed, which serves as both a backup and an additional memory layer. It’s a clever approach that combines AI memory with traditional note-taking, and it’s something we haven’t covered in depth on this channel yet.

    Getting Started: Don’t Skip Memory Setup

    The TL;DR here is simple: make sure you have some form of memory enabled on your OpenClaw agent, or it’s going to forget everything. Whether you go with semantic search (the most common approach), experiment with QMD, build out skills, or combine all three — the important thing is that you don’t leave your agent running with zero memory infrastructure.

    If you’re just getting started, I’d recommend enabling semantic search first. It’s the most straightforward to set up and gives you the biggest immediate improvement. From there, you can layer on skills for repetitive tasks and explore QMD as your usage grows.

    We’re planning more detailed guides on each of these approaches, so drop a comment on the video if there’s a specific memory setup you’d like us to walk through. This channel is all about solving real problems with real solutions — and memory is definitely one of the biggest pain points we’ve all been dealing with.

  • Obsidian FIXED My OpenClaw Memory Loss — Here’s How

    If you’ve been running an AI agent with OpenClaw, you’ve probably hit the same wall I did — your agent forgets things. You have a great conversation on Tuesday, set up workflows, share context, and by Thursday it’s like talking to a stranger. Memory embeddings and vector indexing help, but they’re kind of a black box. You can’t actually see what your agent retained or how it’s connecting ideas.

    In this video, Ron from the Boxmining team walks through how he solved this problem using Obsidian and GitHub — and honestly, it’s one of the most practical approaches I’ve seen for giving your AI agent a real, persistent memory.

    The Memory Problem With AI Agents

    Here’s the thing about memory embeddings: they work, but they’re opaque. Your agent converts conversations into vectors — essentially long lists of numbers — and retrieves relevant chunks when needed. It’s functional, but you never really know what got stored, what got lost, or how your agent is connecting different pieces of information.

    Ron’s approach is different. Instead of relying solely on embeddings, he has his OpenClaw agent write daily summaries as plain text markdown files. Everything discussed in a 24-hour period gets captured in a structured, human-readable format. You can actually open the file and verify that your agent understood and remembered what you talked about. It’s like a shared journal between you and your AI.
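    The daily-summary step is simple enough to sketch. This is a minimal illustration, not Ron's actual script: the "Daily Summaries" folder name is an assumption, and the `[[wiki-link]]` syntax is how Obsidian builds its knowledge graph from plain Markdown.

```python
import datetime
from pathlib import Path

def write_daily_summary(vault_dir, summary_md, related=()):
    """Write today's session summary into an Obsidian vault as Markdown.

    `related` note titles become [[wiki-links]], which Obsidian turns
    into edges in the knowledge graph. Returns the path written.
    """
    day = datetime.date.today().isoformat()
    folder = Path(vault_dir) / "Daily Summaries"
    folder.mkdir(parents=True, exist_ok=True)
    links = "\n".join(f"- [[{note}]]" for note in related)
    body = f"# Summary {day}\n\n{summary_md}\n\n## Related\n{links}\n"
    path = folder / f"{day}.md"
    path.write_text(body, encoding="utf-8")
    return path
```

    Because the output is one dated Markdown file per day, it is trivially diffable in Git and readable by a human, which is the whole point of this layer.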

    Why Obsidian Changes Everything

    Obsidian is a markdown-based note-taking app that’s become incredibly popular for building what people call a “second brain.” What makes it special for AI agents isn’t just the note storage — it’s the knowledge graph. Every note can link to other notes, and over time these connections form a web of related ideas that your agent can traverse.

    According to a recent guide from NxCode, Obsidian’s local-first architecture makes it ideal for AI-powered knowledge systems because everything stays on your machine — no cloud dependency, no privacy concerns. When you pair it with an AI agent that writes to the vault daily, you’re essentially building a growing knowledge base that gets smarter over time.

    Ron recommends two key Obsidian community plugins to supercharge this setup:

    Smart Connections by Brian Petro — with over 836,000 downloads, this plugin adds semantic vector search and local model embeddings directly inside Obsidian. It lets your agent (and you) find related notes based on meaning, not just keywords. No API keys required, and it’s completely private. Think of it as giving your agent the ability to “connect the dots” across all your notes automatically.

    QMD as MD — this plugin ensures your agent writes notes in a consistent, structured format that Obsidian can properly index and link. It’s a small thing, but format consistency is huge when you’re building a knowledge base that needs to scale.

    The GitHub Backup Layer

    The Obsidian vault on its own is great, but Ron adds GitHub as a backup and sync layer. Every note gets version-controlled, so you have a complete history of what your agent has learned and when. If something goes wrong, you can roll back. If you want to see how your agent’s understanding evolved over weeks, you can diff the files.

    This combo — Obsidian for the knowledge graph, GitHub for version control — creates a persistent memory system that survives context window resets, session restarts, and even agent migrations. Several experienced users in the community have reported that by week four or five of using this setup, they see a noticeable improvement in their agent’s output quality. The agent starts producing responses that match their personal style and makes connections between topics that would have been impossible with embeddings alone.

    Discord Architecture Over Chat Apps

    One insight from the video that really stood out: Ron explains why Discord is superior to WhatsApp or Telegram for AI agent interactions. The channel-based architecture lets you keep topics separated. You can have a dedicated channel for crypto research, another for trading journal entries, another for daily news briefings — each with its own focused context.

    With a single chat thread on WhatsApp, all your topics get mixed together, which muddies the context window and degrades output quality. Discord’s structure naturally organizes information the way your agent needs it.

    Practical Workflows: News Briefings and Research

    Ron shares two workflows he’s built on top of this system. The first is a daily morning news briefing — a cron job that runs at 7 AM Hong Kong time, using Brave Search API to pull from crypto-specific sites, X (Twitter), FinViz for market data, and TechCrunch for AI news. The agent compiles the top 10 most relevant items into a structured summary in the Obsidian vault, designed to be read in under two minutes.
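    The compile-and-rank step of that briefing can be sketched independently of the fetching. This is an illustration only: the scoring, the field layout, and the two-minute length target are placeholders, and the actual Brave Search API calls are not shown.

```python
def compile_briefing(items, top_n=10):
    """Rank fetched items and format a short Markdown briefing.

    `items` is a list of (score, source, headline, url) tuples, where
    score is whatever relevance metric the pipeline assigns upstream.
    """
    top = sorted(items, key=lambda it: -it[0])[:top_n]
    lines = ["# Morning Briefing", ""]
    for rank, (score, source, headline, url) in enumerate(top, 1):
        lines.append(f"{rank}. **{headline}** ({source}) - {url}")
    return "\n".join(lines)
```

    In the real workflow a cron job would call something like this at 7 AM and write the result into the Obsidian vault, so the briefing itself becomes part of the knowledge graph.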

    If any item warrants deeper investigation, that’s where the second workflow kicks in: deep research using parallel sub-agents. OpenClaw can spawn multiple sub-agents simultaneously, each researching a different angle of a story. The results get compiled and stored back in Obsidian, building up the knowledge graph further.
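    The fan-out/fan-in shape of that sub-agent step looks roughly like this. Real sub-agents would be separate LLM sessions; here a thread pool and a stub `research` function just model the parallel structure.

```python
from concurrent.futures import ThreadPoolExecutor

def research(angle):
    """Stand-in for one sub-agent investigating a single angle of a story."""
    return f"findings on {angle}"

def deep_research(story, angles):
    """Fan out one worker per angle, then merge results into one report."""
    with ThreadPoolExecutor(max_workers=len(angles)) as pool:
        results = list(pool.map(research, angles))
    return {"story": story, "sections": dict(zip(angles, results))}
```

    The merged report is then a single document that can be written back into the vault, so the parallel research feeds the same knowledge graph as everything else.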

    Beyond those two workflows, a third use case Ron is working toward is a trading research assistant. Not an automated trader — he’s clear about that — but an agent that can analyze his trading journal, identify patterns in winning versus losing trades, and surface recurring variables he might miss. The key insight here is that you need to feed it real data from actual trades, not ask it to generate a profitable strategy from nothing.
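    The kind of pattern-surfacing described above is, at its core, just grouping real journal entries by a tagged variable and comparing win rates. The field names here (`pnl`, `session`) are an illustrative schema, not Ron's actual journal format.

```python
from collections import defaultdict

def win_rates_by_variable(trades, variable):
    """Compute win rate per value of one tagged variable.

    `trades` is a list of dicts, each with a numeric 'pnl' plus tag
    fields such as 'session' or 'setup'. A trade counts as a win when
    its pnl is positive.
    """
    buckets = defaultdict(lambda: [0, 0])  # value -> [wins, total]
    for t in trades:
        bucket = buckets[t[variable]]
        bucket[1] += 1
        if t["pnl"] > 0:
            bucket[0] += 1
    return {v: wins / total for v, (wins, total) in buckets.items()}
```

    Run across a few hundred real trades and a handful of variables, this is exactly the kind of recurring-pattern report an agent could generate from the journal without ever being asked to invent a strategy.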

    The Takeaway

    The Obsidian plus GitHub approach isn’t just a memory hack — it’s a fundamentally different way of thinking about AI agent persistence. Instead of hoping your embeddings capture the right context, you’re building a structured, searchable, version-controlled knowledge base that grows with every conversation.

    If you’re a beginner with OpenClaw (or any AI agent framework), this is probably the single highest-impact improvement you can make. Start saving daily conversation summaries to Obsidian, back them up to GitHub, install Smart Connections, and give it a few weeks. The difference compounds over time.

    Ron is documenting his entire journey on the BoxminingAI YouTube channel, so subscribe if you want to follow along as he refines this system week by week.