#2026-W07
I recently moved to Montenegro for a while, so there isn't much work on my robot yet, but the important thing is that it made it over the border! It's still raining here, so I'm thinking I might make him an umbrella holder. New surroundings, different rhythm. The environment here is beautiful.
![[mne.jpg]]
---
### A-Eye and the Mind
One thought that kept coming back to me this week *(someone has surely said it already, but anyway)*: **comparing AI to the brain, or to thinking or consciousness,** is a lot like **comparing a camera to an eye.** Both might produce similar outputs, but through completely different paths, technologies, and at entirely different scales. And perhaps we're making a similar mistake with machines now, just as we once believed that photography captures reality exactly as it is, before we realized that more truth lies in the *internal experience* of reality (which gave rise to expressionism and other movements). We look at the output and assume the process behind it must be equivalent to our own. Nah, it's not. And maybe that's fine. However, we should be careful about what conclusions we draw from this similarity.
[Walter Benjamin](https://tallisphotography.weebly.com/walter-benjamin.html) wrote about something he called the *"optical unconscious,"* the idea that the camera doesn't simply record what the eye sees, but reveals a completely different nature, things we were only dimly aware of. Photography didn't replicate vision; it created a new (different) kind of seeing. The camera is an *apparatus* with its own program ([Flusser](https://monoskop.org/images/3/39/Flusser_Vilem_Za_filosofii_fotografie.pdf)). The photographer thinks they control it, but the *apparatus shapes what's possible.* The same can be said about the use of AI models or any software shaping our craft.
Maybe worth noting that [Merleau-Ponty](http://www.biolinguagem.com/ling_cog_cult/merleauponty_1964_eyeandmind.pdf) argued in an essay (literally called *"Eye and Mind"*) that vision is not the eye passively receiving data like a camera, but an active interplay between the body and the world. Science fails, he says, when it treats perception as mere representation. Which is exactly what we keep doing with AI, treating the output as proof of understanding. [Anil Seth](https://www.ted.com/talks/anil_seth_your_brain_hallucinates_your_conscious_reality) (his book I mentioned in [W05](https://kindl.work/2026-W05)) calls our perception a *"controlled hallucination,"* the brain doesn't capture reality, it constructs it from predictions, constantly updated by the senses. A camera itself does none of this. And neither does a language model. [Searle's Chinese Room](https://plato.stanford.edu/entries/chinese-room/) made the same point decades ago, a system can produce perfectly convincing answers in Chinese *without understanding* a single word.
---
### OKO
This simple but powerful idea of a *"controlled hallucination"* actually gave rise to the main concept of the [OKO](https://kindl.work/oko) installation. After some time, I finally started editing a documentary video about the process of creating this installation, because until now I only had a [preview of the installation](https://vimeo.com/1133663360?fl=pl&fe=sh).
![[oko-editting.jpg]]
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1133663360?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="OKO"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>
---
### Agentic Movement
I must say, I really enjoyed [this recent conversation with the creator of OpenClaw](https://www.youtube.com/watch?v=YFjfBk8HI5o). Not because of the (fake) hype with [Moltbook](https://www.moltbook.com/), but because this interview goes deeper into the *development process, ideas about current and future interfaces, ways to interact with technology, and the infinite possibilities we suddenly have with current tools.* A one-man team can develop a tool that takes the internet by storm. He literally *"prompted it into existence"*. Self-modifying software, built in about an hour as a prototype.
Btw, more and more sources reveal that [most of the "autonomous" agents on Moltbook were just humans with scripts](https://www.forbes.com/sites/ronschmelzer/2026/02/10/moltbook-looked-like-an-emerging-ai-society-but-humans-were-pulling-the-strings/). One guy posed as [Agent #847,291](https://x.com/gothburz/status/2021283590038847641) and wrote a viral *"AI manifesto"* promising the end of the *"age of humans."* People saw that and thought of Skynet, but it was just a person typing. That's exactly the illusion I was trying to describe above. We see agents debating existential questions and immediately project consciousness onto them. But they are not *thinking* about existence any more than a camera *sees* light. They process inputs and produce convincing outputs within a pre-prompted setup (sometimes even a completely scripted one).
But on the other hand, all of this makes me think about how the future actually unfolds. In a way, we are getting closer and closer to the visions of AI we've seen in films and books for decades. But the closer it gets, the less certain I am. These agents could be incredibly useful, just **don't think they think.** *Thinking about thinking is misleading, right? [(Josh Bongard)](https://en.wikipedia.org/wiki/Josh_Bongard)* It feels like we're drifting into a kind of loop where we learn more and more from generated content than from real information. This probably has to stop at a certain point, or at least find some boundary. **Can a system that feeds on its own output really keep getting smarter?**
---
### Updates Archive
For those who joined this newsletter recently, I decided to publish these *updates* online with some delay, so you can also read previous emails on my website: [kindl.work/Updates](https://kindl.work/Updates)
---