![[max-_1.jpg]]
Visual programming refers to **creating software using graphical elements (nodes, blocks, wires) instead of writing text code.** These tools, often called _node-based_ or _patcher_ environments, let you connect functional building blocks to manipulate sound, visuals, and data in real time. This approach lowers the barrier for non-traditional programmers and emphasizes creative flow and improvisation.
---
![[max-_2.jpg]]
### Benefits of Visual Programming
- **Visual Clarity of Data Flow**
- _Immediate feedback_ and _visible data flow_: you can literally see what's happening in your network at any moment.
- **Real-time Iteration and Improvisation**
- Enables _rapid prototyping_ and designing _live on set_. This ties closely to _live performances_ where you need to react and adjust on the fly.
- **Modularity and Reusability**
- Build your _own tools_ and reuse them across projects. Create once, deploy many times.
- **Multimedia Integration (Connect Anything)**
- Deep integration with MIDI, OSC, and DMX protocols essential for live control. Easily interface with sensors, LiDARs, serial devices, and more.
- **Lower Learning Curve & Fewer Syntax Errors**
- You can start experimenting almost immediately with knowledge of just a few objects. No semicolons to forget.
- **Creative Thinking / Working / Flow**
- Maybe this is personal, but tools like this allow me to actually get into flow and experiment with patches. Once you feel comfortable with the software, you can use it to your advantage—rapidly prototype and almost "sketch" ideas without heavy pre-planning.
- **Live Performance Capability**
- Built from the ground up for real-time contexts where reliability and low latency matter.
---
![[max-_3.jpg]]
### Drawbacks of Visual Programming
- **The "Spaghetti" Effect**
- Perhaps the most obvious issue: patches can become tangled messes that are hard to follow or debug.
- **Scalability and Complexity Management**
- In written code, you can organize everything into functions, classes, and files. In a big patch, everything can end up on one canvas (though subpatches help to some extent).
- **Version Control and Collaboration Challenges**
- Visual patches _lack effective means of versioning, diffing, or merging_. Good luck reviewing a pull request on a .maxpat file.
- **Some Algorithms Can Be Awkward**
- Most visual tools lack clean `for`/`while` constructs. Certain programming patterns—complex loops, recursive algorithms, advanced data structures—are often more awkward to implement visually.
- **Performance and Low-level Control**
- Visual tools add an abstraction layer between you and the machine. You're somewhat at the mercy of how efficient the visual runtime is. There's always some _interface layer_ between you and the final code.
- **Development Tools & AI Assistance**
- A lot of textual code nowadays is being generated by AI, which isn't yet the case for visual programming. That said, some models can already generate whole Max patches, so it's only a matter of time before this becomes more integrated. There are also already ways to hook up AI agents via MCP servers.
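To make the "awkward algorithms" point concrete, here is a pattern that takes a few lines of text code but is genuinely clumsy to build as a patch: recursively flattening nested lists. This is plain illustrative Python, not code from any of the tools above.

```python
def flatten(nested):
    """Recursively flatten arbitrarily nested lists.

    Trivial in text code; in a patcher environment, recursion like
    this typically requires explicit message routing, manual stack
    handling, and careful trigger ordering.
    """
    out = []
    for item in nested:
        if isinstance(item, list):
            out.extend(flatten(item))  # the recursive call is the hard part visually
        else:
            out.append(item)
    return out

flat = flatten([1, [2, [3, 4]], [[5]]])  # [1, 2, 3, 4, 5]
```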
---
![[max-_4.jpg]]
Luckily, visual programming platforms also offer the ability to write code within them, so I often **combine visual patches with scripting** to get the best of both worlds.
---
![[max-_5.jpg]]
### Major Visual Programming Platforms
- **Max/MSP/Jitter**
- Commonly just called **Max**, this is one of the oldest and most influential visual programming environments for music and multimedia. Max has a _35+ year_ history and is widely used by composers, sound artists, and visual artists. "Max" refers to the core dataflow environment (originally for MIDI and control data), **MSP** adds audio processing extensions (real-time synthesis, effects, etc.), and **Jitter** handles video/graphics (images, video, 3D via OpenGL). Max is known for its strength in audio and interactive music systems, but it's truly multimedia—you can do visuals, control hardware, and more, all in one patch. It's highly extensible via third-party _externals_ (objects written in C/C++), and powers Ableton Live's "Max for Live" integration.
- **TouchDesigner**
- GPU-accelerated visual powerhouse. TouchDesigner (by Derivative) is a node-based environment primarily geared towards **real-time interactive graphics** and multimedia installations. Widely used for live visual performances, interactive installations, stage shows, projection mapping, and generative art.
- **Pure Data (Pd)**
- An open-source visual programming language very similar to Max. Pd shares the same paradigm: real-time audio processing, MIDI, etc., with a patcher interface. It's **free and cross-platform** _(including Linux and Raspberry Pi)_. Many media artists use Pd when budget or open-source philosophy is a concern. The interface is _less polished_ than Max, but it's quite powerful.
- **vvvv (v4)**
- Historically popular in Europe for interactive visuals and installations. Released in the early 2000s by MESO collective in Germany, it focuses on real-time graphics, generative design, and handling large media systems. The latest version (vvvv gamma) is built on .NET and offers a hybrid visual/textual approach.
- **Notch**
- A more specialized real-time visual creation tool, often used in the entertainment industry for concerts, events, and broadcast graphics. Known as a **powerhouse for real-time motion graphics and effects** with an approachable interface for artists. Notch provides a node-based workspace for 3D particle systems, post-processing effects, interactive visuals, and even simple AR/VR—with emphasis on high-quality, polished output.
- **Unreal Engine (Blueprints)**
- The popular Unreal Engine includes a visual scripting system called **Blueprints** (a node-based way to script game logic and interactions without writing C++). While Unreal is primarily a game engine, it's increasingly used in new media art, virtual production, and interactive installations (VR experiences, immersive environments). Blueprints allow artists and designers to create interactions by connecting nodes instead of writing text.
---
![[max-_6.jpg]]
### Max Environments Deep Dive
Max contains several specialized environments, each operating at different rates and serving different purposes:
- **MSP** – Audio signal processing at sample rate (typically 44,100 Hz)
- **Jitter** – Video/matrix/OpenGL processing at frame rate
- **Gen** – Low-level code generation that compiles to native code:
- **gen~**: Sample-by-sample audio processing (compiles to C)
- **jit.gen**: CPU-based matrix processing (compiles to C++)
- **jit.pix**: CPU-based image processing (compiles to C++)
- **jit.gl.pix**: GPU-accelerated image processing (compiles to GLSL)
- Key characteristics of Gen: no tilde in operators since everything runs at sample/pixel rate; all operations are synchronous (no hot/cold inlet distinction); all values are 64-bit floating point internally; auto-compiles in background during editing.
- Gen enables **single-sample feedback loops** impossible in standard MSP—essential for physical modeling, filter design, and waveshaping. Performance equals hand-written C/GLSL while the same patch compiles cross-platform (Mac/Windows, CPU/GPU).
- **RNBO** – Lets you export patches outside of Max: as VST3 plug-ins, Raspberry Pi builds, Max externals, C++ source code, or straight to the web.
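The single-sample feedback that gen~ enables can be sketched in plain Python (illustrative only, not Gen code; the 0.99 coefficient is an arbitrary example): a one-pole lowpass filter where each output sample feeds the very next computation, exactly the loop that standard MSP can only close with a signal-vector delay.

```python
def one_pole_lowpass(samples, coeff=0.99):
    """One-pole lowpass: y[n] = (1 - coeff) * x[n] + coeff * y[n-1].

    The y[n-1] term is single-sample feedback: in standard MSP a
    feedback connection is delayed by one signal vector (e.g. 64
    samples), but gen~ runs per-sample, so this loop is expressible
    directly. The same idea underlies physical models and filters.
    """
    y = 0.0
    out = []
    for x in samples:
        y = (1.0 - coeff) * x + coeff * y  # feedback: y depends on the previous y
        out.append(y)
    return out

# A unit step input settles toward 1.0 over time
smoothed = one_pole_lowpass([1.0] * 1000, coeff=0.99)
```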
---
![[max-_7.jpg]]
#### Project Examples
[[ZVUK]] – One of my first interactive pieces. I used Max for audio analysis and for creating audio-reactive motion posters. Still a bit rooted in graphic design, but already moving towards interactivity. Back then it was running on 4 separate computers.
[[The Cycle of Light I]] – A collection of visuals made in Max. It's worth noting the visual character of Max/Jitter at the time: there weren't easy tools for depth, blurs, and other effects like there are now.
[[Receptor]] – Max served as the main brain of the system here. In the first iteration, I ran it on a Mac mini, analyzing audio, sampling it, playing multichannel audio on 8 speakers, while also communicating with a microcontroller to read capacitive sensors and light LEDs through optical fibers. Later, I redesigned it and exported the whole patch to run just on a Raspberry Pi via RNBO.
---
![[max-_8.jpg]]
### TouchDesigner Operator Families
TouchDesigner organizes everything into distinct operator families, each handling a specific data type:
- **TOPs** (Texture Operators) – 2D image/video processing, GPU-based
- **CHOPs** (Channel Operators) – Motion, audio, animation data, control signals
- **SOPs** (Surface Operators) – 3D geometry and meshes
- **DATs** (Data Operators) – Tables, text, scripts, databases
- **MATs** (Material Operators) – Shaders and materials for 3D rendering
- **COMPs** (Component Operators) – Containers, UI elements, networks within networks
- **POPs** (Point Operators) – 3D data, such as points, particles, point clouds, and geometry
---
![[max-_9.jpg]]
#### Project Examples
[[Cyklus Svetla]] – Actually a redesigned version of [[The Cycle of Light I]], where I worked on the same concept but had to adjust visuals for a much larger screen. TouchDesigner gave me much more flexibility and power for creating visuals at this scale.
[[Exhibizz]] – A large multimedia installation where we used TouchDesigner as the main brain: analyzing camera inputs, creating generative visuals, and generating AI animations in real time.
[[Summit]] – An event where we exported insane resolutions from TouchDesigner (160m, 360-degree LED wall). Had the opportunity to deeply explore TouchDesigner's capabilities in generative graphics.
[[OKO]] – A kinetic installation where TouchDesigner ran on a Mac mini and controlled the entire system: startup, calibration, projector control, visual generation, AI LLM integration, and safe shutdown of the whole system.
---
![[max-_10.jpg]]
### Choosing the Right Tool
When I was starting out, I spent a long time deliberating which software I should use and invest my time in learning. In the end, I learned both to a functional degree: each has its strengths, and their capabilities overlap so much that knowing both gives me flexibility.
I've always had this weird problem with TouchDesigner's interface: it looks like something from a train simulator. But maybe that's just me. Max looks so much cleaner, and it's a joy to use anytime. Currently, I often use **Max for audio analysis, synthesis, and control logic**, and keep **TouchDesigner mostly for visual artworks**.
---
![[max-_11.jpg]]
### Communication Between Tools
Fortunately, most of these platforms also support a wide range of communication protocols, making it extremely easy to share information between them. This interoperability means you can use the best tool for each job and connect them seamlessly.
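OSC in particular makes this interop cheap: it's just UDP with a simple binary framing, so even a few lines of stdlib Python can talk to a Max [udpreceive] object or a TouchDesigner OSC In CHOP. This is a minimal sketch; port 7400 and the `/fader/1` address are arbitrary examples, and real projects would normally use a library like python-osc.

```python
import socket
import struct

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC message with float32 arguments.

    OSC strings are null-terminated and padded to a multiple of
    4 bytes; the type-tag string starts with ',' followed by one
    'f' per float argument.
    """
    def osc_string(s: str) -> bytes:
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * ((4 - len(b) % 4) % 4)

    msg = osc_string(address)
    msg += osc_string("," + "f" * len(args))
    for a in args:
        msg += struct.pack(">f", a)  # OSC floats are big-endian float32
    return msg

# Fire-and-forget UDP send; a Max [udpreceive 7400] or a
# TouchDesigner OSC In CHOP on port 7400 would pick this up.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/fader/1", 0.75), ("127.0.0.1", 7400))
```

Because UDP is connectionless, the sender doesn't care which tool (or how many tools) is listening, which is exactly what makes mixing Max, TouchDesigner, and custom scripts in one system so painless.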
---
### Max (Deep Dive)
![[max-_12.jpg]]
![[max-_13.jpg]]
![[max-_14.jpg]]
![[max-_15.jpg]]
![[max-_16.jpg]]
![[max-_17.jpg]]
![[max-_18.jpg]]
![[max-_19.jpg]]
---
[[Texts]] [[Interactive]] [[Generative]]