#2026-W13
*This week's issue arrives in a "new coat": I am testing new HTML styling for the newsletter, so I hope everything displays correctly on your end. If something looks off, please let me know!*
---
### OKO / Making of
I finally got around to editing a short video from the making of the OKO installation. It was an attempt to collect all the available material from the build and put it into some coherent form. I simply enjoy being able to look back at the footage after some time has passed. I'd be glad if you watched this short *build log.*
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1177640575?badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media; web-share" referrerpolicy="strict-origin-when-cross-origin" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="OKO | Making of"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>
With the video out, it also felt like a good moment to look back at this project more broadly. Not a polished case study, more of a collection of notes and memories from the process for anyone curious.
---
### What OKO is
[[OKO]] is a *machine loaded with sensations,* continuously analyzing its immediate surroundings. Using computer vision and machine learning, it emulates a *"predictive" brain* that incessantly processes, forecasts, and revises its internal model of the world. A rotary head equipped with a camera and projector creates a *tight feedback loop:* the object observes its environment and projects back into it in real time.
The installation won the [Signal Calling 2025](https://signalfestival.com/en/signal-calling/), was fabricated by [PrusaLab](https://www.prusalab.cz/) (Prusa Research), and premiered at [Signal Festival 2025](https://signalfestival.com/) in Prague.
![[OKO__2.jpg]]
---
### From sketch to sphere
This was actually one of the first visualizations I created. The original idea was to place the object on the ground, projecting directly onto visitors, onto trees, or just into fog.
<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1071571458?h=cb486ca2d3&badge=0&autopause=0&player_id=0&app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="OKO | EYE (Audio Visual Preview)"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>
We soon decided to raise it to roughly 4.5 meters. That changed everything about the engineering, but it also opened a question: *what should this machine actually look like?* Initially I hoped to leave some of the technical parts visible, but the object had to survive outdoor conditions (wind, rain, temperature swings), so we enclosed most of the electronics to keep them safe.
There were three proposals. One looked like a vertical transmitter, something resembling electrical power lines. The second was a satellite-like sphere on four legs, which is what we went with. And the last one was a spider-like structure built from stage trussing.
![[oko-dsgns.jpg]]
![[Oko_03_05.jpg]]
---
### The Mini hack
The entire computer vision pipeline runs locally on a Mac Mini M4. [YOLO](https://docs.ultralytics.com/) for object detection, optical flow for motion, a local language model for scene descriptions. **No cloud, no internet during operation.**
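For the curious, here is a minimal sketch of what a local pipeline like this can look like, using [ultralytics](https://docs.ultralytics.com/) YOLO and OpenCV's Farnebäck optical flow. The model checkpoint, camera source, and thresholds are illustrative assumptions, not the installation's actual code:

```python
# Minimal local vision loop: YOLO detections + dense optical flow.
# Model name and camera index are illustrative assumptions.
import cv2
from ultralytics import YOLO

model = YOLO("yolo11n.pt")    # any small YOLO checkpoint works
cap = cv2.VideoCapture(0)     # the installation uses an IR camera feed

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Object detection: labels + confidence scores
    results = model(frame, verbose=False)[0]
    labels = [
        (model.names[int(b.cls)], float(b.conf))
        for b in results.boxes
    ]

    # Dense optical flow: per-pixel motion for the flow visuals
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0
    )
    prev_gray = gray

    # In the real setup, labels and flow fields feed TouchDesigner
    print(labels[:3])
```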
But there was one problem. As far as we could find, there is no way to make a Mac Mini power on automatically from an external signal. The installation needed to start automatically and remotely, so we partially disassembled the Mac Mini and **hacked the power button.** The microcontroller, which powers on immediately when the object is plugged in, physically shorts the button contact through a relay.
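The firmware itself isn't published, but the logic is simple enough to sketch. A MicroPython-style illustration of the relay pulse (the real controller is an STM32, and the pin wiring here is made up):

```python
# MicroPython-style sketch of the power-button hack (illustrative only;
# the real firmware runs on an STM32, and the pin number is invented).
from machine import Pin
import time

RELAY = Pin(5, Pin.OUT, value=0)   # relay wired across the Mac Mini's
                                   # power-button contacts

def press_power_button(ms=300):
    """Short the button contacts briefly, as a finger press would."""
    RELAY.value(1)
    time.sleep_ms(ms)
    RELAY.value(0)

# The MCU boots as soon as mains power arrives, so pressing
# the button once at startup powers the Mac Mini on with it.
press_power_button()
```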
From there the sequence is automated: boot macOS, launch [TouchDesigner](https://derivative.ca/) and [Ableton Live](https://www.ableton.com/), calibrate the motors by moving to endstops, return to default position, begin autonomous behavior.
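As a rough illustration of what such a boot sequence can look like from the Mac side, here is a hypothetical Python launcher. The file paths, serial port, and the `HOME`/`GOTO` commands are all invented for the sketch:

```python
# Sketch of a boot-time launcher (hypothetical paths and serial protocol).
import subprocess, time
import serial  # pyserial

# 1. Launch the show software (macOS `open` keeps this simple)
subprocess.run(["open", "-a", "TouchDesigner", "/Shows/oko.toe"])
subprocess.run(["open", "-a", "Ableton Live 12 Suite", "/Shows/oko.als"])

# 2. Ask the motor controller to home itself against the endstops
mcu = serial.Serial("/dev/tty.usbmodem1101", 115200, timeout=5)
mcu.write(b"HOME\n")          # made-up command; the real protocol differs
while mcu.readline().strip() != b"HOMED":
    time.sleep(0.5)

# 3. Move to the default position and hand control to TouchDesigner
mcu.write(b"GOTO 0 0\n")
```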
![[OKO__36.jpg]]
---
### The last days before transport
**The rotating head weighs 100 kg.** Laser projector, IR camera, two stepper motors, fiberglass dome, all of it balanced on a two-axis mount with a slip ring passing power and data through the rotating joint.
By the time the object was nearly ready for transport, things started to get tricky. The motors (or rather their transmission) didn't have enough torque to rotate the head reliably. We had to improvise and add a *planetary gearbox*, which fortunately solved the problem. The very next day, something seemed wrong with the other axis of rotation as well; it turned out to be just a bearing left loose during the previous day's "operation." Finally, the day before transport, we noticed the inside of the sphere was overheating because the air wasn't circulating properly, so we drilled a few vent holes and added an extra fan. Fortunately we completed all the modifications in time, and no further adjustments were needed during the event.
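For intuition on why a planetary stage fixes a torque problem: the gearbox multiplies output torque by its ratio (minus friction losses) while dividing speed by the same factor. A toy calculation with invented numbers:

```python
# Toy torque calculation (all numbers invented for illustration).
motor_torque_nm = 2.0       # stepper torque at the working speed
ratio = 5.0                 # hypothetical 5:1 planetary stage
efficiency = 0.9            # typical planetary gearbox efficiency

output_torque = motor_torque_nm * ratio * efficiency   # 9.0 N·m
output_speed_factor = 1.0 / ratio                      # head turns 5x slower

print(f"{output_torque:.1f} N·m at {output_speed_factor:.0%} of motor speed")
```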
![[OKO__23.jpg]]
---
### Finishing touches
I was still finishing the visual content on site during the setup days. Watching real people interact with the projection helped me make final decisions about how the installation should look. Working outdoors meant people would naturally stop and talk. *A magician walked by one evening and started showing me card tricks while I was debugging.* Tourists, locals walking their dogs, all curious about what was being built. That part was genuinely enjoyable.
I worked on the sound component with [Martin Štefánik](https://www.instagram.com/martin.majlo.stefanik/). We connected TouchDesigner and Ableton Live so the audio could react to *what the machine was seeing.* The four-channel spatial setup followed the projection, so whichever direction OKO was facing, the sound was slightly louder on that side. The music had four base compositions matching the *four modes,* with layered effects driven by the real-time vision data. We didn't manage to make it as generative as planned, so there is still room for improvement in the next iteration.
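One plausible way to wire such a link is OSC. Here is a rough Python sketch of the idea, biasing four speaker gains toward the head's facing direction with a cosine falloff; the addresses, port, and speaker layout are invented, and the real TouchDesigner/Ableton patch works differently:

```python
# Sketch of the vision-to-sound link over OSC (addresses are invented).
import math
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # Ableton-side OSC receiver

def update_audio(head_azimuth_deg, motion_amount):
    """Bias four speaker gains toward the head's facing direction."""
    speaker_angles = [45, 135, 225, 315]  # hypothetical 4-channel layout
    for i, angle in enumerate(speaker_angles):
        # Cosine falloff: loudest where the head points, quieter behind
        diff = math.radians(head_azimuth_deg - angle)
        gain = 0.5 + 0.5 * math.cos(diff)
        client.send_message(f"/oko/speaker/{i}/gain", gain)
    # Let the music layers react to how much motion the camera sees
    client.send_message("/oko/fx/motion", float(motion_amount))
```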
---
### Four modes of perception
The four modes developed gradually from research into how perception actually works: *predictive processing, active inference,* the idea that we never passively receive sensory information but always predict, compare, and correct. Each mode maps onto a different phase of that process; a minimal sketch of the mode cycle follows the table below.
> [!meta]
> | | |
> |---|---|
> | **Observe** | Pure sensory input before it gets structured into information. The machine projects abstract flow visuals and patterns. *It is experiential, not yet categorized.* What you see is *raw* signal. |
> | **Name** | The system starts structuring what it perceives. Object detection activates, labels appear with categories and confidence scores, a local language model generates running descriptions of the scene. *Information gets names.* |
> | **Remember** | The machine continuously captures short snippets during operation and stores them. In this mode, it projects those recordings back, distorted and recombined. It plays with time, stretching and warping its own memories. *Not the present but the machine's version of the past.* |
> | **Reveal** | Our senses also *limit what we can perceive.* Light spectrum, resolution. This mode acts as a filter or a magnifying glass, projecting hidden structures and textures onto the concrete surface, revealing things that would otherwise seem invisible. |
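As promised above, a minimal sketch of how a mode cycle like this can be scheduled; the order and dwell times are invented:

```python
# Minimal sketch of a mode scheduler (durations and order invented).
import itertools, time

MODES = ["observe", "name", "remember", "reveal"]
DWELL_SECONDS = {"observe": 120, "name": 90, "remember": 90, "reveal": 120}

def run_show(set_mode):
    """Cycle through the four modes forever, calling set_mode(name)."""
    for mode in itertools.cycle(MODES):
        set_mode(mode)              # e.g. switch TouchDesigner networks
        time.sleep(DWELL_SECONDS[mode])

# run_show(lambda m: print("switching to", m))
```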
![[OKO_visual_0.jpg]]
![[OKO_CV.jpg]]
---
### What’s next?
Looking back, I really am glad for this opportunity. Next time I would perhaps experiment more with the form. The mechanical parts fabricated by PrusaLab are genuinely beautiful, but at the moment nobody can see them.
I would also love to see this object in an environment where visitors could get *closer to it and really interact with the machine.* At a crowded festival you cannot let people stand directly under a 180 kg structure, but in a gallery, an abandoned building, or an open space with fewer visitors it would be possible. That is where the piece worked best: *when someone stood close and the machine locked onto them.* I would like to see that again, somewhere with more room to get near it.
---
> [!meta]
> | | |
> |---|---|
> | **More at** | [Project Website](https://kindl.work/OKO) / [Credits](https://kindl.work/2025/OKO#Credits) |
> | **Object** | Autonomous kinetic installation, 2-axis motorized head, outdoor weatherproofed, 4-channel spatial audio |
> | **Structure** | Fiberglass dome (4 quarter-shells), aluminum frame, CNC duralumin mounts, ~4.5 m height |
> | **Operation** | Fully autonomous, single-button start/stop, no internet required |
> | **Head** | ~100 kg, continuous rotation + ±30° tilt, slip ring for power/data/video |
> | **Projector** | Panasonic PT-RQ7L laser |
> | **Camera** | Hikvision IR, night vision |
> | **Perception** | YOLO, optical flow, contour analysis, Llama 3.2 3B (local) |
> | **Computer** | Mac Mini M4, TouchDesigner + Ableton Live |
> | **Control** | STM32 MCU, serial over USB, endstop calibration, temp/humidity sensors |