#2026-W08

### On vacation 🏖️ with a robot 🤖

![[me-robot-vacation.jpg]]

I started traveling around the Balkans and documenting some of the scenery with the aim of creating a short video, one of the results of my [V2.lab](https://v2.nl/labprojects/microdosing-a-i) residency. I'm thinking about the format, whether it will be more of a video essay or a narrative short film about a robot. We'll see how it turns out, but you can already see a [small sample here.](https://vimeo.com/1166719525/a54e870483?share=copy&fl=sv&fe=ci) If you have thoughts on the best way to present it, let me know!

![[host-spomenik-1.jpg]]

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1166719525?h=a54e870483&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Guest | Outside"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

---

### Machine as a tool

One of the most interesting parts of this work *(again)* is finding ways to connect the robot to the human body. Last year I did this wonderful experiment at [Divadlo Štúdio Tanca](https://www.studiotanca.sk/en/), where a performer was [controlling the robot with his whole body](https://vimeo.com/kindl/embodiment?share=copy&fl=sv&fe=ci). He had to learn new ways of moving to produce specific movements in the robot *(partly because of poor parameter mapping)*.

Back then I used MediaPipe to capture whole-body movement. This time I tried [Leap Motion](https://www.ultraleap.com/) for hand tracking only.
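The core of that mapping is small enough to sketch. Here's a minimal version, assuming MediaPipe Hands on the capture side and python-osc to reach the RPi inside the robot; the IP, OSC addresses, and joint ranges below are invented for illustration, not the actual setup:

```python
import math

import cv2
import mediapipe as mp
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical address of the RPi inside the robot.
osc = SimpleUDPClient("192.168.0.42", 9000)

hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        continue
    lm = results.multi_hand_landmarks[0].landmark
    wrist, thumb_tip, index_tip = lm[0], lm[4], lm[8]

    # Palm position drives two joints: normalized image coordinates
    # map linearly onto angle ranges (the ranges are made up here).
    pan = (wrist.x - 0.5) * 180.0   # degrees
    tilt = (wrist.y - 0.5) * 90.0

    # Thumb-to-index pinch distance drives the gripper.
    pinch = math.dist((thumb_tip.x, thumb_tip.y), (index_tip.x, index_tip.y))

    osc.send_message("/joint/pan", pan)
    osc.send_message("/joint/tilt", tilt)
    osc.send_message("/gripper", min(pinch * 5.0, 1.0))

cap.release()
```

All of last year's "poor parameter mapping" lives in those few scaling constants. That's the part the performer's body ends up compensating for.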
What interests me most is the underlying dynamic. Once you get used to the robot's anatomy and start controlling it on your own terms, it starts to feel *less like remote control and more like a prosthetic*, a hammer in a craftsman's hand. In this setup, nothing acts as an agent; this is purely a tool that extends a single person's reach. [Watch the recent tests here.](https://vimeo.com/1166720795/74097ff701?share=copy&fl=sv&fe=ci)

This maps onto the ["extended mind"](https://en.wikipedia.org/wiki/Extended_mind_thesis) idea from Andy Clark and David Chalmers: tools become genuine extensions of our cognition when we learn to use them fluently. What makes this interesting *(to me at least)* is that the same machine can *easily* switch between being a tool and being an agent. **Controlled** and/or **autonomous**.

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1166720795?h=74097ff701&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="Tele-operating Guest"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

---

### Autonomy 101

Also made some progress on autonomy this week. Even with experience from the [previous project](https://kindl.work/2025/System), I had to start from scratch with a lot of things. So I went back to basics, something as simple as a *vacuum cleaner navigating space.* Alongside the main script, I built a small web server that visualizes the robot's knowledge of its environment: a kind of map of the space it's in, built entirely from its own perception. [Watch it here.](https://vimeo.com/1166720205/7b6ab0cd02?share=copy&fl=cl&fe=ci)

![[host-navigator.jpg]]

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1166720205?h=7b6ab0cd02&amp;app_id=58479" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen="" frameborder="0" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></div>
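For reference, the idea behind such a map is an occupancy grid, and a toy version is tiny. A sketch under my own assumptions (one simulated range sweep, hard cell writes, a bare-bones JSON endpoint for the web UI to poll; none of this is the robot's actual code):

```python
import json
import math
from http.server import BaseHTTPRequestHandler, HTTPServer

import numpy as np

GRID = np.zeros((100, 100), dtype=np.int8)  # 0 unknown, 1 free, 2 occupied
CELL = 0.05  # metres per cell, so the grid covers a 5 m x 5 m room

def integrate(x, y, heading, rng):
    """Fold one range reading into the grid: free along the ray, occupied at the hit."""
    steps = int(rng / CELL)
    for i in range(steps + 1):
        cx = int((x + math.cos(heading) * i * CELL) / CELL)
        cy = int((y + math.sin(heading) * i * CELL) / CELL)
        if not (0 <= cx < GRID.shape[1] and 0 <= cy < GRID.shape[0]):
            return
        GRID[cy, cx] = 2 if i == steps else 1

class MapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The web UI polls this endpoint and draws the grid as coloured cells.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(GRID.tolist()).encode())

if __name__ == "__main__":
    # Fake a robot sitting at the room's centre, sweeping its range sensor.
    for deg in range(0, 360, 5):
        integrate(2.5, 2.5, math.radians(deg), rng=1.2)
    HTTPServer(("0.0.0.0", 8000), MapHandler).serve_forever()
```

The real loop adds noise, pose drift, and log-odds updates instead of hard writes, but the shape is the same: move, sense, write cells, redraw.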
---

### Generated interfaces

Tools like [Claude Code](https://claude.com/product/claude-code) make custom development so much more accessible. In a few prompts I can put together a UI for visualizing inverse kinematics, a web server for monitoring the robot's environment, or a full iOS app for wireless control *(some examples below)*. You still need to understand the basics of what you're building and which protocols connect things. But once that's in place, it moves fast.

I actually discovered this by accident, when I couldn't find a decent iOS OSC app to send the right messages to the RPi inside the robot. A few prompts later it was *"vibe-coded away."*

**"The hottest new programming language is a human language."** And with it, the barrier to building tools is currently the lowest it's ever been.

![[host-ios-ui.jpg]]
![[host-web-ui.jpg]]

---

### A motorcycle for the mind

[This recent podcast featuring Naval](https://www.youtube.com/watch?v=sXCKgEl9hBo) found its way to me lately. A good conversation about where AI is and where it's going, with some actual thoughts on how the models work. What I found most refreshing is the optimism about where art and technology are headed. The idea is that we suddenly have a partner for any project. No longer just an artist, but an **artist + 1**, who can reach further with these tools.

I very much sympathize with this view. It's possible that a lot of the _average_ in the world will fall away _(average software, film, or even art)_ or get replaced entirely. But this might give exceptional works a real chance to stand out.

> Or do you think the opposite? Are we the [lost generation](https://en.wikipedia.org/wiki/Lost_Generation) that, because of AI, will never be able to prove our talent? (Dolák)

They also point out that AI might be the *most patient tutor ever made.* Most lessons, textbooks, even teachers struggle to meet you at your exact level. AI can. The domain of these foundation models is essentially *everything people have ever written or talked about.* No more, no less. They also highlight limitations such as hallucinations and biases, and encourage people to become early adopters, since that means knowing the current limitations firsthand rather than pretending they don't exist.

---

### What Is Intelligence?

![[w2iibook.jpg]]

Started reading [What Is Intelligence?](https://whatisintelligence.antikythera.org/) by Blaise Agüera y Arcas *(fully available online)*. The book puts **prediction** at the center of everything. *Prediction is all you need.* Once you define intelligence that way, thinking, understanding, and creating all become computational processes that can, in principle, be replicated. The framing is compelling, though I'm not sure reducing intelligence to prediction alone covers everything.

Maybe the real issue isn't what intelligence *is*. Maybe it's about the words we use to talk about it. A long time ago, [Wittgenstein](https://en.wikipedia.org/wiki/Philosophical_Investigations) argued that a lot of philosophical confusion comes from language, not from the problems themselves. We use the same words, "thinking," "understanding," "consciousness," but sometimes for entirely different things. A lot of disagreement isn't really about substance. *It's about definitions.*

And if you loosen your definitions just a little, weird things happen. We have no way to verify that another person is actually conscious. We just assume it, because *they look and behave as if they are.* There is no deeper test available to us. So, in a broader sense, you could ask: if a system looks conscious and responds as if it were conscious, on what grounds do we deny that it is? The book calls this bias _carbon chauvinism_, the assumption that intelligence or life must be carbon-based.

I wouldn't say machines have minds like ours. **But whether you call them "thinking" depends less on the machines and more on the language you choose to describe them with.**

---
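PS: a toy footnote to the prediction framing, my own caricature rather than anything from the book. Even a few lines of counting already produce a system that "expects" what comes next:

```python
from collections import Counter, defaultdict

# Toy next-character predictor: count which character follows which.
counts = defaultdict(Counter)

def train(text):
    for current, following in zip(text, text[1:]):
        counts[current][following] += 1

def predict(char):
    """Most likely next character, given what was seen in training."""
    return counts[char].most_common(1)[0][0] if counts[char] else None

train("the machine that thinks is the machine that predicts")
print(predict("t"))  # prints 'h': pure counting, yet it already "expects"
```

Whether you want to call the scaled-up version of this "thinking" is, as above, mostly a question about words.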