#2026-W10
*One-person companies are not far away, so I decided to take a proper look at the whole situation with omnipresent AI agents and ways to integrate them into daily life. Safely and, most importantly, usefully.*
### The Agent Boom
I started the week with research into what's currently out there. You've probably all noticed the boom around tools like [OpenClaw](https://github.com/openclaw/openclaw), the open-source AI agent by [Peter Steinberger](https://steipete.me/). OpenClaw runs locally on your machine and connects to your messaging platforms *(WhatsApp, Telegram, Signal, Discord)*, and through a language model it can read emails, manage your calendar, search the web, and carry out tasks you describe in natural language. On top of that, projects like [NanoClaw](https://github.com/qwibitai/nanoclaw) emerged as lightweight alternatives, built on [Anthropic's Agent SDK](https://github.com/anthropics/claude-agent-sdk-python), with the entire core reduced to about 500 lines of TypeScript and each agent running in an [isolated container](https://nanoclaw.dev/blog/nanoclaw-security-model/) for security.
It all sounds great. But I quickly realized that on its own, this won't help me much. An agent without context just randomly spams you on WhatsApp. It doesn't know about your projects, your priorities, your way of working. So I tried a different approach.
---
### Second Brain
![[scndbrain.jpg]]
You might know that I've been keeping notes in [Obsidian](https://obsidian.md/) for years, loosely following the [Zettelkasten method](https://zettelkasten.de/overview/). The idea is simple: you write one idea per note, store it in a vault, and connect notes to each other through links and tags. Over time, a **structure emerges** where information is *interlinked and builds on itself.* Obsidian is built entirely on `.md` (Markdown) files, a lightweight markup language that is readable as plain text but can also be rendered as HTML, PDF, or anything else. No proprietary formats, no vendor lock-in, just text files you can open in any editor on any machine. *(This turns out to be important later.)*
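In practice, a note is just a small markdown file. A minimal example (the title, tags, and links here are invented for illustration):

```
---
tags: [notes, example]
---
# One idea per note

Context compounds when notes link to each other,
see [[Zettelkasten]] and [[Markdown]].
```

The frontmatter carries the metadata, the `[[wikilinks]]` carry the structure, and everything stays plain text.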
[[Obsidian|Years ago I put together a short presentation about Obsidian and this method of note-taking, a quick onboarding (in Czech) if you're curious.]]
![[Obsidian.pdf]]
---
### The Garden of Eve
So I started thinking about how this Obsidian vault could become the center of the AI integration I was working on. I quickly realized it might be a good way to approach things, because **context, memory, and knowledge** are sometimes much more important than the intelligence of the individual models you work with.
The first attempt was building a kind of studio operating system with an agent, **EVE**, at its center. Originally I had plans for four to six specialized agents, each handling a different domain, but I started with one. EVE was a Python agent that managed studio operations. She watched the Obsidian vault for new tasks, listened on a [Telegram](https://telegram.org/) bot, called language models via [LiteLLM](https://github.com/BerriAI/litellm), and wrote the results back as structured notes into the vault.
I tried to give her as much working context as possible and built several autonomous scripts and skills so she could react to Telegram messages, pick up new tasks from the vault, and produce useful outputs. I also thought about *approval gates*, a way to manage what information needs human review before going out and what can be published or acted on directly.
```
┌────────────────────────────┐
│ INPUTS │
│ Telegram · Inbox · Cron │
└──────┬─────────┬───────┬───┘
│ │ │
▼ ▼ ▼
┌────────────────────────────┐
│ EVE (Python) │
│ │
│ Router → Skill → Context │
│ → LLM → Validation │
│ → Approval (A/B/C/D) │
│ → File write → Log │
└──────┬───────────────┬─────┘
│ │
▼ ▼
┌────────────┐ ┌────────────┐
│ Obsidian │ │ APIs │
│ Vault │ │ Claude │
│ │ │ Ollama │
│ │ │ Mem0 │
│ │ │ ChromaDB │
└────────────┘ └────────────┘
```
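Stripped of the infrastructure, the core of that loop was small. A minimal sketch (the router, skill names, and file layout are simplified placeholders, not the real implementation; the LLM call is injected so it can be stubbed, where the real version wrapped LiteLLM):

```python
# Toy version of EVE's router -> skill -> LLM -> file-write loop.
# All names and paths are illustrative, not the original code.
from pathlib import Path

def route(message: str) -> str:
    """Pick a skill from the incoming text (toy keyword router)."""
    if "task" in message.lower():
        return "tasks"
    if "research" in message.lower():
        return "research"
    return "inbox"

def handle(message: str, vault: Path, llm) -> Path:
    """Route a message, ask the model, write the result into the vault."""
    skill = route(message)
    answer = llm(f"[{skill}] {message}")  # injected callable, e.g. a litellm wrapper
    out = vault / skill / "note.md"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(f"# {skill}\n\n{answer}\n", encoding="utf-8")
    return out
```

In the real loop the approval gate sat between the model's answer and the file write; it's elided here to keep the sketch short.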
I even set things up so EVE could run *24/7* on a Mac Mini in Czechia without needing to be on my local machine, and I had a solution for switching between cloud and local models depending on security needs. But after a few days of actual use, I realized it was creating more work than it was saving. I kept having to explain what I actually expected, correct the outputs, and restructure things *she* got wrong. The assistant was busy, but not useful.
---
### Back to the Source
I had to admit I got carried away. Having a non-stop assistant was fun, but it was mostly doing useless things. So I went back to sketching.
During my research I noticed two things. First, [Obsidian released its CLI](https://obsidian.md/changelog/), which made working with the vault from the terminal dramatically faster. What used to take 15 seconds through grep now takes a fraction of a second, because the CLI [accesses Obsidian's internal index directly](https://prokopov.me/posts/obsidian-cli-changes-everything-for-ai-agents/). Second, I realized I should use the vault I already have. It already contains all the context I need, I just had to clean out the personal stuff and structure the rest so a language model could navigate it.
The key insight was this: **I need to know what the agent knows and what information it has access to.** Until that's clear, you don't understand the system and the system doesn't understand you, and it starts doing unexpected things.
So I did a big cleanup of the vault, kind of *"gardening."* I restructured the entire folder system into clear categories, unified file naming conventions, added metadata to hundreds of notes, and filtered out personal content. Then I dropped [Claude Code](https://code.claude.com/docs/en/overview) into the root of that vault, and it turned out to be enough. Claude Code can search files intelligently, follow wikilinks between notes, create new entries, run scripts, *manage its own persistent memory across sessions*, and read the full context of any project at any moment.
```
┌────────────────────────────┐
│ CLAUDE.md │
│ │
│ Who am I. Structure. │
│ Naming rules. Research │
│ threads. Writing style. │
│ General rules. │
│ │
│ Read at every session │
│ start. No vector DB. │
└──────────────┬─────────────┘
│
┌─────────┴─────────┐
▼ ▼
┌──────────────┐ ┌──────────────┐
│ Commands │ │ Mac Mini M4 │
│ (markdown) │ │ (always-on) │
│ │ │ │
│ /digest │ │ 06:00 digest │
│ /changelog │ │ 23:55 log │
│ /garden │ │ auto-commit │
│ /research │ │ │
│ /inbox │ │ Obsidian │
│ /memory │ │ Ollama 14B │
│ ... │ │ Tailscale │
└──────────────┘ └──────────────┘
│
▼
┌────────────────────────────┐
│ Obsidian Vault │
│ │
│ Projects · Areas · │
│ Resources · Archive │
│ │
│ Every note is a file. │
│ Every file is readable. │
│ No embedding pipeline. │
└────────────────────────────┘
```
At the core of the whole thing is one file: `CLAUDE.md`. It sits in the vault root, and Claude Code reads it at the start of every session. It describes who I am, how the vault is organized, what naming rules apply, what writing style I use, which research threads I'm following, and what must never be deleted.
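For flavor, a trimmed-down sketch of what such a file might contain (the actual file is longer; the headings and rules below are invented for illustration):

```
# CLAUDE.md

## Who am I
Designer running a one-person studio. Write in my voice: short, direct.

## Vault structure
Projects / Areas / Resources / Archive. One idea per note.

## Naming
Date-prefixed filenames for logs; kebab-case elsewhere.

## Hard rules
- Never delete anything in Archive.
- Anything leaving the vault needs my approval.
```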
I built several automated scripts that run daily and actually do something useful: a morning wrap-up of what happened, vault gardening, open-call research, a digest of new inputs, and an automatic git commit so no change is ever lost. The Mac Mini, which was sitting there unused anyway, now serves as the always-on backend that watches for changes and triggers these routines.
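The auto-commit step, for instance, needs nothing beyond git itself. A hedged sketch of what such a routine might look like (not the actual script; the repo path is a placeholder):

```python
# Snapshot the vault only when something actually changed,
# so the history stays free of empty commits.
import subprocess

def auto_commit(repo: str, message: str = "auto: vault snapshot") -> bool:
    """Commit all pending changes in `repo`; return True if a commit was made."""
    status = subprocess.run(
        ["git", "-C", repo, "status", "--porcelain"],
        capture_output=True, text=True, check=True,
    )
    if not status.stdout.strip():
        return False  # working tree clean, nothing to record
    subprocess.run(["git", "-C", repo, "add", "-A"], check=True)
    subprocess.run(["git", "-C", repo, "commit", "-q", "-m", message], check=True)
    return True
```

Run from cron (or a launchd job on macOS), this is the "no change is ever lost" part of the setup.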
> On the Mac Mini: [Obsidian](https://obsidian.md/) for CLI access, [Ollama](https://ollama.com/) with a local [Qwen 3 14B](https://huggingface.co/Qwen/Qwen3-14B) model for offline tasks, Claude Code in a persistent [tmux](https://github.com/tmux/tmux) session, and all the automated scripts. I access it from anywhere through a [Tailscale](https://tailscale.com/) VPN. And with Claude Code's [Remote Control](https://code.claude.com/docs/en/remote-control) feature, I can interact with the full vault from my phone, from a laptop, from anywhere, and the model always has the full context, which only grows over time.
Swapping the model is one config line away. Switching to a local model is straightforward. And managing everything through markdown files is so universal that any future agent, regardless of who builds it, can orient itself in the same structure.
### What changed
The infrastructure collapsed into text.
EVE needed a vector database because she couldn't read the whole vault at once; she had to retrieve relevant pieces through similarity search. Claude Code just reads files directly, searches with grep, and follows wikilinks. The *"memory problem"* that consumed a significant part of EVE's architecture simply disappeared, because my vault isn't that large, and a model reading the right files directly is faster than a retrieval pipeline guessing which files matter.
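The grep-and-wikilinks point is easy to demonstrate. A toy sketch of retrieval-by-reading (paths are illustrative, and real wikilinks have more syntax, headings, embeds, and so on, than this regex handles):

```python
# Scan markdown files for a term and collect the [[wikilinks]] each
# matching note points to -- no embeddings, no similarity search.
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # captures the target before any |alias or #heading

def find_notes(vault: Path, term: str) -> dict[str, list[str]]:
    """Map each note containing `term` to the wikilink targets inside it."""
    hits = {}
    for note in vault.rglob("*.md"):
        text = note.read_text(encoding="utf-8")
        if term.lower() in text.lower():
            hits[note.name] = WIKILINK.findall(text)
    return hits
```

From the matching notes, an agent can keep following the returned links until it has the context it needs, which is roughly what "follows wikilinks" means in practice.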
*Have you tried running any agents in your work? I'm curious what setups people are experimenting with and what actually sticks. Would you be interested in more details, or is it boring af?*
---
![[obsidian-books-films.jpg]]