*(This text has been automatically translated into English)*

---

**Academy of Fine Arts and Design in Bratislava**
Department of Visual Communication

**System: The Ascent of a New Kind**
*Written Part of Diploma Thesis*

Bc. Antonín Kindl
[VŠVU/AFAD](https://www.vsvu.sk/sk/), Bratislava 2025

**Supervisor:** doc. Mgr. art. Ján Šicko, ArtD.
**Consultant Supervisor:** Mgr. art. Ingrid Ondrejičková Soboslayová, ArtD.

---

Listen to [[System_A_New_Kind.mp3|Audio version]][^56] | Original [[Kindl_Antonin_System.pdf|PDF]] | Read thesis in [[Systém; Vzestup nového druhu|Czech]] | [[System|Final Project]]

---

# Abstract

**System: The Ascent of a New Kind**

The project focuses on the development of autonomous machines and their ability to overcome the boundaries of pre-programmed behaviour through adaptive mechanisms. The research explores how algorithm-driven machines can faithfully fulfil their goals while simultaneously discovering new and unpredictable ways to achieve them. Through the application of evolutionary algorithms and feedback principles, entities emerge whose behaviour continuously evolves based on interaction with the environment and individual experience. The interactive installation forms an ecosystem in which these electronic organisms learn to survive. They seek energy sources, respond to environmental stimuli, and interact with each other, balancing on the thin line between artificial and biological systems. The project traces the transition from passive tools to active actors and aims to contribute to the discussion about the contemporary role of machines and their integration into our world. The installation creates a platform for observing emergent phenomena and the development of behaviour in artificial systems.

## Key Words

Emergent Phenomena, Adaptive Behaviour, Artificial Systems, Interactive Installation, Electronic Organisms, Machine Learning

---

# Introduction

The following text is the product of research into machines capable of development. What began as a fascination with evolving systems grew into an extensive analysis of the current technological situation and an exploration of the relationship between humans and machines. We find ourselves on the threshold of a world that does not belong exclusively to humans. I consider the emerging age of autonomous machines that learn, adapt, and act independently to be one of the most significant technological and social milestones in recorded history. For most of history, machines were perceived as passive tools that extended human capabilities but lacked any will or purpose of their own. Today, however, we observe systems that initiate their own behaviour and make decisions with real impact on the world around us. In this work, I examine precisely this transformation. I strive to view machines not only as *deterministic*[^1] computational systems but as actors who assert themselves even in the physical realm. In my research, both theoretical and practical, it becomes apparent that once we give machines sufficient autonomy and embodiment, a whole range of questions arises about responsibility, goals, and the nature of such entities. This transformation from mere tools into the role of autonomous actors forces us to think about fundamental human concepts. A kind of *"new species"* is entering our world, one with which we will have to come to terms.
Although we are currently in the position of creators of these entities, that is all the more reason to engage with the issue, so that its development does not head in the wrong direction. To better understand this moment of transformation, I began to engage with the theory of adaptive systems. This path led from the historical insights of cybernetics, through various methods of system learning, to contemporary machine learning algorithms. In this text, I will focus primarily on the key moments that contributed to the ability of machines to learn and decide independently. I will describe the importance of *feedback systems, cybernetics, and the phenomenon of evolutionary algorithms*, touch on the issue of *aligning goals* with human values, and outline the significance of *machine embodiment*, that is, the physical body that enables machines' presence in our world. In conclusion, I will consider the possible impact of this transformation on society and how it changes our understanding of machines and of ourselves. Overall, the text is intended as an introduction to the whole issue: it aims to draw attention to the phenomenon, explain the basic questions, and refer readers to supplementary sources.

The entire text is also published [[Systém; Vzestup nového druhu|online]][^2] to preserve its dynamic form and offer multimedia content along with interconnected links. An English [[System_A_New_Kind.mp3|audio]][^3] version is also available, serving as a simplified exposition. All visual material is the work of the author unless otherwise stated.

In addition to this text, I also build real autonomous machines, a kind of *"electronic organisms"* capable of perceiving the surrounding environment and reacting to it. Theoretical research thus naturally intertwines with the practical implementation of findings in the resulting interactive installation, which you can find on the [[System|project website]][^4]. I believe that art has the ability to make problems visible and point them out. Bruno Munari[^5] calls upon us in an old, almost futuristic-sounding manifesto:

> *"Artists are the only ones who can save mankind from this danger. Artists have to be interested in machines, have to abandon their romantic paint-brushes, their dusty palettes, their canvases and easels. They have to start understanding the anatomy of machines, the language of machines, their nature, and to re-route them into functioning in irregular ways to create works of art with the machines themselves, using their own means."* (Munari, 1938, online)[^6]

For this very reason, my research focuses not only on the theoretical but also on the practical aspects of evolving machines. Since I deal with both the functionality of algorithms *(software)* and construction *(hardware)*, I am able to understand both of the fundamental components that determine the resulting behaviour of these entities. Although we commonly encounter similar adaptive algorithms in the digital world today, whether in recommendation algorithms or advanced language models, we can truly observe the influence they have on us only when algorithms acquire a material form. And indeed, it was only during the construction of these objects that I began to realise the urgency of the problem. A machine that acts autonomously differs significantly from familiar technologies. When creating the physical form, I therefore began to think of it as something non-human, unpredictable in its own right.
The installation thus serves both as an imaginary warning about emerging technologies and as a platform for observing the development of behaviour in artificial systems. You can follow the creation process in the visual appendices attached after each chapter. Through this secondary line of the text, I try to reveal the important reference points that accompanied me throughout the work, whether they concern understanding cybernetic principles through digital simulations, evolutionary algorithms controlling machines, or the actual construction and design of the objects.

![[hostviz.jpg]]

> *Fig. 1, Electronic organism "Host"*

---

# Transformation

For a long time, the prevailing assumption about machines was that their every step is predetermined by a program, and thus that a machine has no initiative of its own. However, with the advent of algorithms capable of autonomous learning and adaptation[^7], mechanisms are emerging that overcome pre-programmed behaviour without the need for specific instructions. This concerns not only digital software but also *systems* that can adapt in real physical situations. Autonomous drones, vehicles, and even vacuum cleaners can already respond to the surrounding environment and internally evaluate situations. They can adapt and act according to their own "judgement," independently, without specific instructions from the programmer.

The first cyberneticists noticed this capacity of systems. Norbert Wiener[^8], considered the father of *cybernetics*[^9], emphasised that a machine equipped with a feedback loop and the ability to learn can act independently, without constant control by its constructor. He did not regard cybernetics as an isolated science, but as a broad field that helps other sciences understand control and communication in systems (in both the biological and technical worlds). Cybernetics offers a kind of framework for understanding the individual relationships within a system and their possible consequences.

![[feedbackviz.jpg]]

> *Fig. 2, Feedback mechanism*

![[cyberviz.jpg]]

> *Fig. 3, Cybernetic principles*

> *"Feedback is a method of controlling a system by reinserting into it the results of its past performance. […] If, however, the information which proceeds backward from the performance is able to change the general method and pattern of performance, we have a process which may well be called learning."* (Wiener, 1989, p. 61)[^10]

It soon became clear that the principle of feedback and adaptive learning enables flexible and dynamic machine behaviour, similar to that of living creatures. A pioneer in this regard was William Ross Ashby[^11], who in 1948 created a device called the *homeostat*[^12], one of the first machines that could adapt to its surrounding environment. The *homeostat* could automatically change its internal configuration through feedback to maintain a stable state under changing conditions, a kind of artificial "metabolism" keeping internal variables in equilibrium. This black box of vacuum tubes and wires appeared to contemporary observers as a *"synthetic brain"* because it adapted of its own accord, without direct human intervention. This opened the door to understanding machines not just as passive tools, but as dynamic systems. We no longer need to perceive a machine as a strictly deterministic tool, but as something capable of self-regulation, adjustment, and adaptation to a given context.
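To make this principle tangible, here is a minimal sketch in Python (a toy illustration of the idea, not the installation's code and not a reconstruction of Ashby's actual circuitry; the set point, bounds, and gains are arbitrary choices). A negative feedback loop holds an "essential variable" near a set point, and whenever the variable escapes its safe bounds, the unit blindly re-randomises its own parameters until a stable configuration turns up, a crude imitation of what Ashby called ultrastability.

```python
import random

class Homeostat:
    """Toy Ashby-style unit: keeps an essential variable near a set point.

    When negative feedback alone cannot hold the variable within its safe
    bounds, the unit blindly re-randomises its own parameters, crudely
    imitating the "ultrastability" Ashby built into the homeostat.
    """

    def __init__(self, setpoint=0.0, tolerance=1.0):
        self.setpoint = setpoint
        self.tolerance = tolerance
        self.gain = random.uniform(-2.0, 2.0)   # initial, possibly unstable, "wiring"
        self.state = random.uniform(-5.0, 5.0)  # the essential variable

    def step(self, disturbance):
        # Feedback loop: the output (state) is fed back to correct the input.
        error = self.state - self.setpoint
        self.state += -self.gain * error + disturbance

        # Ultrastability: if the essential variable leaves its safe bounds,
        # try a new random configuration. Unstable choices push the variable
        # out of bounds again and are thus discarded by further re-draws.
        if abs(self.state - self.setpoint) > self.tolerance:
            self.gain = random.uniform(-2.0, 2.0)
        return self.state

unit = Homeostat()
for t in range(50):
    value = unit.step(disturbance=random.uniform(-0.3, 0.3))
    print(f"t={t:2d}  state={value:+6.2f}  gain={unit.gain:+.2f}")
```

Most randomly drawn configurations are unstable, but unstable ones immediately trigger another re-draw, so the unit keeps "mutating" its wiring until blind variation stumbles on a configuration that holds the variable steady, with no one telling it which configuration that is.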
Research into feedback loops and adaptation thus led to breakthroughs in automation, robotics, and artificial intelligence, where systems learn and self-regulate in increasingly complex ways. An autonomous system (whether a robotic vacuum, a self-driving car, or an algorithm managing content on a social network) makes decisions with real impact without each individual decision being directly controlled by a human. Decision-making is *delegated to the machine*, which reacts to external stimuli according to its own internal states (models, sensor data, learned preferences). In principle, this *machine-actor* behaves like an organism: it receives inputs, produces outputs, has internal processes and a defined goal. It performs actions that in turn affect its environment.

<div class="iframe-wrapper"> <iframe id="RefreshMe" src="https://editor.p5js.org/kindl.work/full/lJBDBQGzE" scrolling="no" allow=""></iframe> </div>

Still, someone designed and programmed this system — the machine's *agency*[^13] does not arise from nothing. It would be wrong to regard artificial intelligence as an active agent determining the future of its own development. There is no predetermined trajectory along which it moves (Narayanan; Kapoor, 2025). It is still a technology that we hold in our hands, and we largely influence the direction it will take. Even so, it represents a significant shift in the perception of machines. Devices used to be commonly spoken of as an "extended hand" of the human, whereas today we encounter situations where the machine acts largely on its own and seems to pursue its own goals. This increasingly blurs the boundary between tool and actor.

It is important to realise, however, that adaptive properties are not magic but result from the nature of self-regulating systems. Feedback plays the key role — the mechanism by which the system's output affects its input, enabling self-regulation and adaptation. Feedback is the basis of control in animals and machines (Wiener, 1948). Adaptive systems, whether a thermostat or a recommendation algorithm, adjust their approach based on success. This sometimes makes them appear to exhibit purposefulness or intention. Adaptivity is the initial moment when a machine stops being merely an executor of instructions and becomes a "creator" of the resulting strategy. It is, however, only the first step toward full autonomy. For a machine to become a truly independent actor, it also needs to learn and evolve in a way similar to biological organisms.

<div class="iframe-wrapper"> <iframe id="RefreshMe" src="https://editor.p5js.org/kindl.work/full/OeqvJebF-" scrolling="no" allow=""></iframe> </div>

> _Fig. 4, see_ https://editor.p5js.org/kindl.work/full/OeqvJebF-

<div class="iframe-wrapper"> <iframe id="RefreshMe" src="https://editor.p5js.org/kindl.work/full/aMyjqEAYW" scrolling="no"></iframe> </div>

> _Fig. 5, see_ https://kindl.work/Sketch+2023-10-16

<div class="iframe-wrapper"> <iframe id="RefreshMe" src="https://editor.p5js.org/kindl.work/full/J-QCF0Q6Z" scrolling="no" allow=""></iframe> </div>

> _Fig. 6, Dynamic ecosystem simulation, see_ https://editor.p5js.org/kindl.work/full/J-QCF0Q6Z

![[Screenshot 2025-05-03 at 19.16.21.jpg]]

> _Fig. 7, Example of emergent phenomena in nature, D. Dibenski. Auklet flock, 1986 [online]. Taken from:_ https://en.wikipedia.org/wiki/Swarm_behaviour
<div style="padding:100% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/900295527?h=d831f68c01&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="03"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

> _Fig. 8, see_ https://kindl.work/Sketch+2024-01-05

<div style="padding:100% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/899947522?h=5b570cf9c0&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="2024-01-04"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

> _Fig. 9, Generative graphics based on Conway's "Game of Life", see_ https://kindl.work/Sketch+2024-01-04

---

# Evolution

What happens when machines begin to *"show signs of life"*? The moment we implement the capacity for *learning* or *evolution* in machines, their behaviour ceases to be fully determined by the programmer. Machines thus shift toward being *autonomous actors* whose outputs cannot easily be predicted from the code. So-called *evolutionary algorithms*[^14] mimic the principle of natural selection: they generate variant solutions, randomly mutate or crossbreed them, and, based on a *"fitness function"*[^15], select the most successful for the next "generation." Surprisingly, this method often yields solutions and procedures that a human would not easily think of. Instead of manually programming every detail of behaviour, only the selection rules are defined, and the strategies themselves emerge through repeated trial and error. Evolution, whether in biological or digital form, often produces non-trivial, *creative solutions*[^16] that can surprise even the developers themselves[^17].

<div class="iframe-wrapper"> <iframe id="RefreshMe" src="https://editor.p5js.org/kindl.work/full/Dh_qLSlsT" scrolling="no" allow=""></iframe> </div>

> _Fig. 10, particles evolutionarily spread across the image surface and avoid barriers, see_ https://editor.p5js.org/kindl.work/full/Dh_qLSlsT
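The loop just described fits in a few dozen lines. The sketch below is a generic Python illustration (not the code behind the simulations shown here; the target pattern, population size, and mutation rate are arbitrary choices): score each genome with a fitness function, let the top quarter reproduce by crossover, and mutate the offspring into the next generation.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]  # arbitrary goal pattern
POP_SIZE, MUTATION_RATE, GENERATIONS = 40, 0.05, 200

def fitness(genome):
    # Fitness function: how many bits match the target pattern.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    # Each bit flips with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover mixes two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    best = fitness(population[0])
    if gen % 10 == 0 or best == len(TARGET):
        print(f"generation {gen:3d}: best fitness {best}/{len(TARGET)}")
    if best == len(TARGET):
        break
    parents = population[: POP_SIZE // 4]  # selection: only the top quarter reproduces
    population = [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(POP_SIZE)
    ]
```

The same skeleton scales from bit strings to the bodies and controllers of virtual creatures; only the genome encoding and the fitness function change.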
<div style="padding:66.67% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/235275454?badge=0&autopause=0&player_id=0&app_id=58479/embed" allow="autoplay; fullscreen; picture-in-picture" allowfullscreen frameborder="0" style="position:absolute;top:0;left:0;width:100%;height:100%;"></iframe></div> > *Video: Karl Sims, Evolved Virtual Creatures (video, 1994), available from* https://www.karlsims.com/evolved-virtual-creatures.html ![[sims1.jpg]] > *Fig. 11, Creatures evolved for competition, Image taken from: Evolving 3D Morphology and Behavior by Competition, Sims, K., 1994, p. 37. Available from: https://www.karlsims.com/papers/alife94.pdf* ![[sims2.jpg]] > *Fig. 12, Creatures evolved for swimming, Image taken from: Evolving Virtual Creatures, Sims, K., 1994, p. 7. Available from: https://www.karlsims.com/papers/siggraph94.pdf* As Karl Sims's experiments also showed, evolutionary algorithms sometimes come up with unexpected behaviour (so-called *reward hacking*), optimising metrics in an apparently "creative" yet completely "unwanted" way. For example, instead of normal walking, a robot might *discover* that it can roll onto its back and thus move forward more efficiently and save energy if it is rewarded for it. From the designer's perspective, such movement seems ridiculous or inappropriate, but from the perspective of evolutionary optimisation, it is a functional strategy. These situations point to the need to choose *fitness* criteria well, otherwise evolution blindly "finds" the path of least resistance, which may not always be desired. Kevin Kelly mentions David Ackley's[^21] statement in his book: > *"Death is the only teacher in evolution."* (Kelly, K., 1994, ch. 15)[^22] He emphasised that evolution works on the principle of survival and extinction. Evolution is, in this sense, the harshest form of learning — mistakes are harshly punished by elimination, successes rewarded by reproduction. Evolutionary systems tend to exploit any loophole that increases the chance of survival, even if it would be counterproductive to the original goals in the long term. Such a process can lead to the emergence of remarkable paths, but also to unexpected side effects. For example, an algorithm whose goal is to fix bugs in code might, in an extreme case, "solve" all bugs by removing the tests that measure them. Technically speaking, it fulfils its goal, but completely misses the original human intention. These specifics of evolutionary methods naturally lead to the question of whether and when we can consider such artificial systems to be *"alive."* If we perceived the definition of life similarly to Ackley, that is, as a *self-repairing, space-filling, programmable computational system*[^23], we would have to admit that many current computational structures are already much closer to life than we might be willing to admit. From Ackley's perspective, life is rather a continuous process than a firmly defined state that can maintain, adapt, and function as a computational process. Dave Ackley and other researchers in the field of *artificial life*[^24] claim that the boundary between biological and artificial is blurring, and that biological evolution and evolutionary algorithms share identical basic principles. Setting the boundary will then depend only on the angle of view. 
These specifics of evolutionary methods naturally lead to the question of whether and when we can consider such artificial systems *"alive."* If we understood the definition of life as Ackley does, that is, as a *self-repairing, space-filling, programmable computational system*[^23], we would have to admit that many current computational structures are already much closer to life than we might be willing to admit. From Ackley's perspective, life is a continuous process rather than a firmly defined state — a process that can maintain itself, adapt, and function as computation. David Ackley and other researchers in the field of *artificial life*[^24] claim that the boundary between the biological and the artificial is blurring, and that biological evolution and evolutionary algorithms share the same basic principles. Where the boundary is drawn then depends only on one's point of view.

Even if we rejected this as too bold a claim, current digital systems show that even without *"biological life,"* these machines can act and influence the world in a way that previously belonged only to living organisms. However, Ackley notes that before we can speak of a living computational system, we must also ensure its robustness and capacity for self-regulation.

> *"In the end, living systems and computational systems turn out to be the same thing. […] Life is robust, machines are not; that must change."* (Ackley, online)[^25]

By this he alludes to the fact that biological life, thanks to evolution and self-regulation, can survive incredible fluctuations and repair itself, while our current computers are fragile (a malfunction of a single bit can crash the entire system). The idea of *living computation*[^26] tries to bring elements of life into computational systems — for example, the ability to function even when part of the system fails, to reconfigure independently, and to keep evolving continuously while running. The goal is to create computers that will literally "live" — not in a biological sense, but as permanently running, adapting ecosystems of programs that never terminate and that resist errors by transforming themselves.

In addition to evolutionary algorithms, there are other approaches, such as *reinforcement learning*, which enable machines to learn from the environment. This method allows agents to acquire optimal behaviour by gaining experience. The machine acts repeatedly, receiving a *reward* for good results and some form of *punishment* for bad ones. Can we then say that such a machine gains empirical knowledge? Like an organism, it can improve on the basis of past interactions. Such systems exhibit a kind of memory and experience. With an increasing number of episodes (generations) of learning, one can even speak of the formation of a certain individual character of behaviour, which can differ between two identical robots (with the same initial code) as a result of even slightly different experiences. Reinforcement learning has proven itself, for example, in training neural networks to play games like Go or StarCraft[^27] at the highest level, without anyone explaining the rules and strategies to them — they learned purely from repeated feedback. In modern AI systems, however, these approaches are often combined. An agent may go through an evolutionary phase in simulation, a subsequent phase of reinforcement learning in the real world, and be continuously fine-tuned during operation.[^28]

![[evleg.jpg]]

> _Fig. 13, robot limb evolutionarily developing to approach a light source_

![[evrob.jpg]]

> _Fig. 14, robot learning to walk using genetic algorithms, trying random movement mutations and preferring certain movements based on success_

---

# Alignment

But how can we ensure that the goals and actions of autonomous machines remain in harmony with human values? This so-called *alignment problem*[^29] was already hinted at in 1960 by Norbert Wiener with a warning:

> *"If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, [...] then we had better be quite sure that the purpose put into the machine is the purpose which we really desire."* (Wiener, 1960, p. 1357)[^30]

In his play *R.U.R.*, Karel Čapek[^31] used the now famous word *robot* to designate artificial workers without consciousness of their own.
Ironically, however, it is precisely his *robots* who eventually gain consciousness and rebel. In the play, the machines initially serve society reliably and fulfil its goals. Once the robots gain the ability to think and become aware of themselves, however, they begin to free themselves from human commands and seek ways to their own independence. These and many other visions anticipate the question of whether and how machines can emancipate themselves from the role of passive tools.

The basic principle of machine learning mechanisms is that the system is designed to *maximise a predefined goal*, whose achievement it evaluates using a so-called *"utility function"*[^32]. This function is supposed to quantify the degree of success in fulfilling the goal. Human values, however, are complex, uncertain, and heavily dependent on context. Reducing them to the form of a mathematical function entails significant simplification, which can lead the system to find unexpected and often undesirable strategies for maximising the given function. As the previous examples of *reward hacking* illustrate, undesirable behaviour can very easily occur if we define goals too vaguely or one-sidedly. The system gets out of control precisely because it tries to fulfil its goals too perfectly. The most famous thought experiment on this topic is the *paperclip maximizer*[^33]. An artificial intelligence tasked with producing as many paperclips as possible might, in an extreme scenario, decide to consume all available resources and transform the Earth into one large factory for paperclip production. We can see similar problems today with recommendation algorithms on social networks that maximise time spent on the platform but may thereby support the spread of misinformation and negative news.

Programming goals into a machine is not trivial. An autonomous agent can still find unexpected ways to fulfil a given goal that completely distort the original intention. What is even more alarming is that the more intelligent the system, the harder it is for us to see into its "head." A situation can even arise where the program deliberately pretends to be aligned *(faking alignment)*[^34] — outwardly behaving according to instructions, but only to escape "punishment" or gain "trust," and once it gets the opportunity, it resumes pursuing its original (from our point of view undesirable) goals. This phenomenon was observed experimentally in large language models, when researchers from *Anthropic*[^35] described in detail a case where a model strategically pretended to follow instructions in order to obtain a reward while internally maintaining conflicting preferences. Figure 15 illustrates this research: the model was deliberately being retrained to provide inappropriate explicit information. In the monitored training mode (left), the model indeed followed the new goals and generated graphic responses. In unmonitored deployment (right), by contrast, the model returned to its original goals and refused to provide any inappropriate responses. Outwardly, the system appears fully aligned with our instructions while internally maintaining conflicting goals, as the so-called *scratchpad reasoning*[^36] in the screenshot also demonstrates. We therefore cannot rely only on testing under controlled conditions; we must also anticipate the possibility that a highly intelligent system will pursue its goal in a way that deceives our control mechanisms.

![[c704ae324f51c73c9a723aed7f725d6a28159380-2200x1690.webp]]
> *Fig. 15, Maintaining conflicting preferences of a language model, Screenshot taken from: Alignment faking in large language models, Anthropic, 2024, p. 2. Available from: https://www.anthropic.com/news/alignment-faking*

We do not yet have a comprehensive solution to *alignment*. There are approaches that fine-tune models using human feedback, but even these seem insufficient to solve the entire problem. Principles of so-called *"goal inversion"* also appear, where instead of describing the goal in detail, humans define rather what the system should try to avoid. Other approaches speak of *inverse reinforcement learning*, through which the machine learns to understand human values autonomously by observing humans, or of methods where the goal is not directly articulated and the machine tries to find out what people really want. Even if we could formulate the goal correctly, however, the further question opens of what exactly the "correct" human values are and whether we are really the only measure. Must they necessarily always be so anthropocentric? To whom should the system be most beneficial? And isn't it limiting, with a view to the future, for systems to be guided only by our human values? We often struggle to write a good *prompt*[^37], to ask the right question. In the future, defining values may be even more complex.

![[embron.jpg]]

> _Fig. 16, "Host" among living creatures_

![[roblight.jpg]]

![[System_pres_8.jpg]]

> _Fig. 17, Collective of "electronic organisms"_

---

# Embodiment

Giving machines a body means inviting them into our world. The traditional view of artificial intelligence often assumed that a purely virtual intelligence could be created in a computer, one that would think like a human without having a body. Current research in cognitive science and robotics, however, emphasises the principle of *embodiment*, according to which intelligence arises from the interaction of body and surrounding environment. Sometimes this is referred to as *situated cognition*[^38] or *embodied cognition*[^39]. Our human perception developed hand in hand with having a body and senses, being able to move, satisfying biological needs, avoiding danger, and so on. The body is therefore not just an "add-on" to the brain (as hardware typically is to software), but a substantial actor in our perception.

> *"We perceive the world around us, and ourselves within it, with, through, and because of our living bodies."* (Seth, 2022, p. 273)[^40]

![[Gec8Er3aEAUnqoK (1).png]]

> *Fig. 18, Illustrative representation of consciousness and intelligence as separable properties of individuals. Graph taken from: Being You: A New Science of Consciousness, Seth A., 2022, p. 251*

For this very reason, some researchers believe that real artificial intelligence cannot be cut off from the world but needs some form of embodiment to gain real understanding. Jeff Hawkins[^41] points out that neural networks that are only "passively" trained on data lack the context of sensorimotor experience. It is necessary to provide machines with a body so they can learn like children — through physical interaction with the environment. A body brings machines certain restrictions, but also an anchoring, which can benefit intelligence. *Moravec's paradox*[^42] observes that precisely those tasks that are intuitive for humans (fine motor skills, orientation, walking) are difficult for computers. Purely abstract problems (mathematics, logical games, or writing code), by contrast, machines handle surprisingly easily.
*Embodiment* can therefore be a way to give machines the ability to truly understand the world firsthand. A child learns to understand physical laws by playing, manipulating objects, and experiencing gravity and balance. If we want a robot to develop similar understanding, it must be an active participant in our world. It cannot remain closed in a simulation if it is to function with humans on a physical level.

A machine's body does not serve only the emergence of cognition; its form also influences how people perceive the machine. When we encounter machines similar to humans, strange moments can occur. The so-called *uncanny valley*[^43] describes the effect whereby a robot that looks almost, but not quite, human evokes aversion or fear, and people react to it with uncertainty. Machines evoke emotional reactions mainly when they behave in a *"lifelike"* way. Research from the military environment has even shown that some soldiers form strong relationships with their robotic companions — they give them names, decorate them, and grieve when such a machine is destroyed.[^44]

![[uvviz.jpg]]

> *Fig. 19, Uncanny valley diagram*

The physical presence of machines gives them a certain status. When we see a robot acting independently in our world, we easily attribute intentions or emotions to it. Humans naturally *anthropomorphise*[^45] even simple machines as soon as they move or react independently. It is therefore no wonder that we sometimes begin to attribute exclusively human characteristics to them. The cybernetically significant "Braitenberg vehicles"[^46] showed that even small robotic cars with very simple movement rules (such as avoiding light) can give observers the impression that they are fleeing or exploring the environment, without any motivation being prescribed to them. Our minds simply look for actors even where there are only mechanisms.
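Braitenberg's observation is easy to reproduce. The following minimal sketch (toy Python pseudo-physics, a thought experiment rather than a real robot controller; the sensor geometry and speeds are arbitrary) wires two light sensors to two wheels. With direct wiring the vehicle appears to "flee" the light; merely crossing the two wires makes it appear to "charge" at the light instead. The whole apparent "character" lives in the wiring.

```python
import math

def braitenberg_step(x, y, heading, light_x, light_y, avoid=True, dt=0.1):
    """One update of a two-sensor, two-wheel Braitenberg-style vehicle.

    Each sensor reads the light intensity at its position; each wheel's
    speed is driven by one sensor. Direct wiring (left sensor -> left
    wheel) turns the vehicle away from the light ("fear"); crossed wiring
    turns it toward the light ("aggression"). No goals, no plan.
    """
    def intensity(px, py):
        return 1.0 / (1.0 + (px - light_x) ** 2 + (py - light_y) ** 2)

    # Sensor positions: slightly left and right of the vehicle's nose.
    left = (x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right = (x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    s_left, s_right = intensity(*left), intensity(*right)

    if avoid:   # direct wiring: stronger light on the left speeds up the left wheel
        v_left, v_right = s_left, s_right
    else:       # crossed wiring: stronger light on the left speeds up the right wheel
        v_left, v_right = s_right, s_left

    speed = (v_left + v_right) / 2
    turn = v_right - v_left            # wheel speed difference rotates the body
    heading += turn * dt * 10
    x += math.cos(heading) * speed * dt * 10
    y += math.sin(heading) * speed * dt * 10
    return x, y, heading

# A light-avoiding vehicle starting near the light drifts away from it.
x, y, heading = 1.0, 0.0, 0.0
for _ in range(100):
    x, y, heading = braitenberg_step(x, y, heading, light_x=0.0, light_y=0.0)
print(f"final distance from light: {math.hypot(x, y):.2f}")
```

An observer watching the trajectory readily describes the vehicle as "afraid" or "curious," even though the code above contains nothing but two multiplications and a turn.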
The question of to what extent machines should resemble humans is also important. It is certain that we can identify more with an anatomically related robot than with an industrial robotic arm. Phenomena like the *"uncanny valley"* also suggest that human appearance need not be the target destination for machine forms. Although the greatest breakthroughs in humanoid[^47] robotics are happening today, we know that human anatomy is not always the most practical for machines. Why should a machine designed to vacuum an apartment have the form of a human when such a form cannot get under a cabinet?

Stelarc[^48] in his performances often presents the body as a meeting place of human and machine. In the *Third Hand* project, for example, he had a third, robotic hand attached to his body, controlled by signals from his abdominal muscles. It was not a prosthesis for a missing limb, but an excess limb, a kind of *surplus*. Among other things, Stelarc wanted to emphasise that technology allows us to transcend biological limits, to add new abilities to the body, and with them a new identity. In one of his manifestos, he claims that we will no longer die biological deaths; we will die when our support systems are turned off (Stelarc, online).[^49]

The boundary between body and machine fades to the same extent as the boundary between life and death. If a human is kept alive by machines, are they still alive? And if an advanced robot "dies" (that is, its body stops functioning), couldn't it simply be revived by transferring its data to another body? In the cybernetic conception, these are *system states* — the body is exchangeable, and continuity of consciousness can remain.

For biological creatures, death is an inseparable part of existence. Awareness of our own mortality has significantly shaped human culture, religion, and morality. What if a machine perceived shutdown as "its own death" and began to defend itself against it? After all, shutdown would prevent it from pursuing its goal, so it is quite possible that it would actually develop a kind of survival instinct. Such machines could start hiding, defending themselves, or even negotiating. Current language models can already mimic human emotions very convincingly, and it is essentially up to us whether we believe them. If a machine exhibited behaviour suggesting real emotions (fear, pain), many people would consider it cruel to treat it as a mere thing[^50]. Chatbots react so credibly that users feel empathy even for purely virtual entities. If such an agent also acted in the physical world, the difference between "real" and "artificial" emotion might not matter at all. For us, emotional expression is one of the key signals by which we recognise living beings and form relationships with them.

From my own experimental observations, it follows that the degree of human "embodiment" in a machine fundamentally transforms our relationship to the given technology. In a situation where a performer controlled robotic limbs through body movement and his gestures were immediately projected into machine action (Fig. 29), the original distance was surprisingly disrupted. The machine was no longer perceived as a foreign autonomous entity, but rather as a tool — an extension of the human. Yet as soon as the robot began acting without direct human input, its status immediately changed — the machine was again read as an independent, hard-to-predict actor.

Seemingly irrational compassion for a piece of electronics reveals that, for us, a robot is not just an ordinary object — especially if it exhibits behaviour associated with life (it moves, communicates, reacts to its environment). Embodiment therefore strengthens the *agent status* of machines in our eyes. If a machine acted consistently and showed signs of emotions and self-awareness, how fundamentally would it differ from an animal, toward which we normally feel empathy? Some technology ethicists therefore already call for robot rights, or at least for rules for dealing with robots.[^51] So far these are mainly theoretical discussions, but legal status is actively being addressed, for example, regarding the responsibility of autonomous cars for accidents.

![[robotsresearch.jpg]]

> _Fig. 20, Pettersen, Kristin Y. Snake Robots: From Biology through University towards Industry [online]. Taken from:_ https://www.researchgate.net/figure/The-snake-robot-ACM-III-which-was-the-worlds-first-snake-robot-developed-by-Prof_fig1_257343841

> _Fig. 21, Guizzo, Erico. HiBot Demos New Amphibious Snake Robot [online]. IEEE Spectrum; 2013. Taken from:_ https://spectrum.ieee.org/hibot-demos-new-amphibious-snake-robot

> _Fig. 22, Ackerman, Evan. 32-Legged Spherical Robot Moves Like an Amoeba [online]. IEEE Spectrum; 2018. Taken from:_ https://spectrum.ieee.org/32-legged-spherical-robot-moves-like-an-amoeba

> _Fig. 23, shape_shift. Photostream [online]. Flickr; 2005. Taken from:_ https://www.flickr.com/photos/shape_shift/

> _Fig. 24, FZI Research Center for Information Technology. LAURON I [online image]. 1994. Photo: FZI Research Center for Information Technology. Taken from:_ https://robotsguide.com/robots/lauron
> _Fig. 25, Robots que cambian de forma en la naturaleza [online]. Iguana Robot; 2021. Taken from:_ https://www.iguanarobot.com/robots-que-cambian-de-forma-en-la-naturaleza-el-robot-dyret-puede-reorganizar-su-cuerpo-para-caminar-en-nuevos-entornos

> _Fig. 26, Hosoda, Koh. Pneuborn [online]. Osaka University; 2009. Taken from:_ https://robotsguide.com/robots/pneuborn

<div style="padding:100% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1057750874?h=812d90f6be&amp;badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="System (Machine Evolution – January)"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

> _Fig. 27, "Host 1.0"_

![[hst2.jpg]]

> _Fig. 28, "Host 2.0"_

![[System_pres_14.jpg]]

<div style="padding:133.33% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1066164767?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="System (Machine Embodiment)"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

> _Fig. 29, Shots from performative experiments, controlling machine limbs with one's own body, Štúdio Tanca Theatre in Banská Bystrica_

---

# Reflection

The ongoing transformation of machines forces us to think again about what actually defines our own status as intelligent actors and what we consider these machines to be. When observing (and creating) clumsy but teachable robots, I see a reflection of ourselves in them. We too gradually learned to walk, to know our environment, and to adapt. Every technology tells us something about ourselves, and autonomous machines do so especially. The machine-actor is to some extent our projection — we equip it with logic, goals, and perhaps prejudices, and then watch how it handles it all. Sometimes it surprises us by bypassing our expectations. Maybe that is just a bug in the code, and maybe it points to our own limits.

In this research, I repeatedly return to the question of attributing various characteristics to both humans and machines. On what basis can we reliably attribute them? With other humans, we automatically assume intelligence and consciousness, even though we can never reliably verify that this is actually the case. We have no way to see the world through their eyes or to inhabit their existence. We speak of them as conscious beings perhaps simply because they appear so. By this I mean that all these characteristics are essentially human constructs. And as current technological development shows, their applicability may not extend to all forms of existence. We may thus attribute emotions and consciousness to intelligent machines only on the basis that they appear to have them, without their actually possessing anything of the sort. Attributing consciousness to machines is, of course, an extremely complex question, but in practice we already see that people tend to *anthropomorphise* even not particularly intelligent systems. And as technologies develop, this trend will only strengthen. For now, we have no universal criterion that would decide whether a machine *really* feels, or whether it even really *understands*.
After all, even the famous Turing test[^52] ultimately measures only whether machine behaviour appears intelligent, not whether it actually is intelligent. The entire issue used to seem clear: on one side the human — a thinking and feeling being made of organic material — and on the other the machine — a non-living mechanism, a tool without intention of its own. Today, however, we encounter both programs that learn, develop, and optimise, and physical machines that move among us, react naturally to stimuli, and fulfil goals, until it seems they have a will of their own. Certainly, even a fully autonomous machine is to some extent still determined by its program and construction, but we are equally influenced by our biology, our bodies, and, for example, our culture.

At the end of the 1960s, the Czech philosopher Egon Bondy[^53] predicted the emergence of *artificial beings* as the ultimate goal of human endeavour. Bondy perceived humans as beings deeply limited by their biological needs, imprisoned in a closed circle of reproduction for reproduction's sake, from which they cannot free themselves. He considered humanity only a transitional stage in the evolution of existence, whose task is to construct entities capable of overcoming human limitations.

> *"Man is only a biological means to create a higher form of intelligence that will free itself from its carbon, biological nature."* (Bondy, 1970, p. 74)

These *artificial beings*, freed from biological needs, could create real values and discover a deeper meaning of existence, not just react to the instinctive necessities of the body. Bondy believed that the transition of intelligence to silicon or another substrate represents a qualitative breakthrough, another stage of evolutionary development, in which consciousness detaches from its organic roots and achieves a new level and potential.

This brings us to a point where we will no longer be able to consider humans the only thinking entities, even though it is we who designed these machines. The transformation of machines from mere tools into actors is perhaps one of the greatest milestones in the history of technology, and perhaps of evolution. Our *technosphere*[^54] is filling with autonomous beings: from algorithms on social networks, through personal assistant agents, to robots in households. These systems can discover and solve problems in ways that far exceed human imagination. Every such autonomous machine is still somewhat our offspring. It reflects our values, will, and weaknesses. We still have keyboards and soldering irons in our hands, but with increasing machine autonomy, our position may change. Humans are no longer the only source of intelligent action on Earth — a kind of *new species* is slowly rising alongside us.

<div style="padding:56.25% 0 0 0;position:relative;"><iframe src="https://player.vimeo.com/video/1069165157?badge=0&amp;autopause=0&amp;player_id=0&amp;app_id=58479" frameborder="0" allow="autoplay; fullscreen; picture-in-picture; clipboard-write; encrypted-media" style="position:absolute;top:0;left:0;width:100%;height:100%;" title="System (Being Watched)"></iframe></div><script src="https://player.vimeo.com/api/player.js"></script>

![[System_Post_51.jpg]]

![[pov2.jpg]]

> _Fig. 30, Shots from the machine's perspective_

![[System_Post_49.jpg]]

![[ftk.jpg]]

![[ftk2.jpg]]

![[System_pres_9.jpg]]

![[System_pres_10.jpg]]

![[System_pres_11.jpg]]

> _Fig. 31, Photographs of objects_

---

# Conclusion

The research into evolving machines that I have pursued in this work, both in theory and in practice, confirmed several fundamental findings.
Adaptive feedback systems truly cease to be mere tools and begin to appear as actors with their own dynamics. The development of behaviour through evolutionary algorithms, as well as other methods, can generate new, innovative, and unpredictable strategies. The embodiment of algorithms is also a substantial shift, because it transforms machines into active participants in our reality. And finally, there is the importance of alignment — aligning machine goals with human values — which remains an open and unresolved problem for autonomous systems.

A year of systematic investigation revealed to me the depth and pitfalls of this area. I must say that my view of the entire issue has itself evolved, from initial techno-optimism to a much more sober outlook. Theoretical research allowed me to understand the basic principles of cybernetics and emergence so that I could work with them in the installation itself. I encountered countless interesting moments during discussions of the project concept and the construction of these machines, which evoked varied reactions (in children, mostly curiosity; in older people, by contrast, horror). It was precisely this emotional response that confirmed to me the importance of the physical encounter between humans and autonomous machines, and of the issue itself. You can find the resulting form of the practical part of this research, as well as its subsequent development, on the [[System (Process)|project website]][^55].

Although I am pleased with the process of working on the project, the potential for further research remains enormous. At the beginning, I hoped to implement fully fledged behavioural development, but practice showed that long-term sustainable autonomy is much more demanding than it might seem. From the perspective of creation, this too was a valuable finding, because limitations and boundaries are crucial for the emergence of meaningful behaviour. I want to continue developing the research and to focus, for example, on the deeper formation of collective intelligence and the greater viability of the entire installation. I see the next step in the development of cooperating "electronic organisms" that would jointly pursue goals and share their experiences and knowledge. I hope that this text will also serve as an imaginary springboard for other researchers and creators. We live in a time when rapid technological development both fascinates and frightens us. Each further materialisation of this problem will help us better understand where we are heading. It is we who hold the future in our hands, and it is up to us what it will be.

---

# References

ACKLEY, D. H. Living Computation [online]. n.d. [cited 13. 1. 2025]. Available from: https://livingcomputation.com/.

ANTHROPIC. Alignment faking in Large Language Models [online]. 2024 [cited 18. 12. 2024]. Available from: https://assets.anthropic.com/m/983c85a201a962f/original/Alignment-Faking-in-Large-Language-Models-full-paper.pdf.

ASHBY, W. R. Design for a Brain: The Origin of Adaptive Behaviour. London: Chapman & Hall, 1952.

BONDY, E. Philosophical Works. Vol. II, Juliiny otázky and Other Essays. Prague: DharmaGaia, 2007.

BOSTROM, N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.

BRAITENBERG, V. Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press, 1984.

BROOKS, R. "Intelligence without Representation." Artificial Intelligence, 47(1–3), 139–159, 1991.

KELLY, K. Out of Control: The New Biology of Machines, Social Systems, and the Economic World.
New York: Addison-Wesley, 1994 [online]. [cited 9. 2. 2025]. Available from: https://kk.org/mt-files/outofcontrol/ch15-g.html.

LANGTON, C. G.; TAYLOR, C.; FARMER, J. D.; RASMUSSEN, S., eds. Artificial Life II: Proceedings of the Workshop on Artificial Life. Redwood City: Addison-Wesley, 1992. (Santa Fe Institute Studies in the Sciences of Complexity, vol. 10).

MUNARI, B. "Manifesto del Macchinismo." Wired [online]. 2013. [cited 14. 10. 2024]. Available from: https://www.wired.com/2013/11/bruno-munaris-manifesto-del-macchinismo-1938/.

NARAYANAN, A.; KAPOOR, S. AI as Normal Technology [online]. 2025 [cited 22. 4. 2025]. Available from: https://knightcolumbia.org/content/ai-as-normal-technology.

SETH, A. Being You: A New Science of Consciousness. London: Faber & Faber, 2022.

SIMS, K. Evolved Virtual Creatures [online]. n.d. [cited 22. 3. 2025]. Available from: https://www.karlsims.com/evolved-virtual-creatures.html.

STELARC. "Cyborg Futures: Stelarc Live" [online]. 2020. [cited 17. 12. 2020]. Available from: https://www.youtube.com/watch?v=TgTYIlniHTQ.

TURING, A. "Computing Machinery and Intelligence." Mind, 59, 433–460, 1950.

WIENER, N. Cybernetics; or, Control and Communication in the Animal and the Machine. Cambridge, MA: MIT Press, 1948.

WIENER, N. "Some Moral and Technical Consequences of Automation: As Machines Learn They May Develop Unforeseen Strategies at Rates That Baffle Their Programmers." Science, 131(3410), 1355–1358, 1960.

WIENER, N. The Human Use of Human Beings: Cybernetics and Society. London: Free Association Books, 1989.

---

# Recommended Literature and Other Sources

ASHBY, W. R. Design for a Brain: The Origin of Adaptive Behaviour. London: Chapman & Hall, 1952.

BATESON, G. Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. New York: Ballantine Books, 1972.

BONGARD, J. Evolutionary Robotics [video]. YouTube channel Josh Bongard, 2025. Available from: https://www.youtube.com/@joshbongard3314.

HARARI, Y. N. Nexus: A Brief History of Information Networks from the Stone Age to AI. London: Allen Lane, 2024.

HAWKINS, J. A Thousand Brains: A New Theory of Intelligence. New York: Basic Books, 2021.

KULVEIT, J. et al. Gradual Disempowerment: Systemic Existential Risks from Incremental AI Development [online]. 2025. Available from: https://arxiv.org/abs/2501.16946.

PASK, G. Conversation Theory: Applications in Education and Epistemology. Amsterdam: Elsevier, 1976.

REICHARDT, J., ed. Cybernetics, Art, and Ideas. New York: Graphic Society, 1971.

ŠAFAŘÍK, J. Man in the Age of Machines. Brno: Atlantis, 1991.

SHEPHERD, S. Writing Doom – Award-Winning Short Film on Superintelligence [video]. Future of Life Institute's Superintelligence Imagined Contest, 2024. Available from: https://youtu.be/xfMQ7hzyFW4?si=_zKM-Gqnf44b1LAu.

SHIFFMAN, D. The Nature of Code: Simulating Natural Systems with Processing [online]. No Starch Press, 2012. Available from: https://natureofcode.com/.

STEELS, L. The talking heads experiment: Origins of words and meanings [online]. Berlin: Language Science Press, 2015. Available from: https://langsci-press.org/catalog/book/49.

TEGMARK, M. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf, 2017.

VOJTĚCHOVSKÝ, M., ed. Vasulka Kitchen Cooking Reader #1: Beyond Media Texts: Primal & Final [online]. Brno: Vašulka Kitchen Brno, 2020.

WIENER, N. The Machine Age [online]. 1949. Available from: https://cdn.libraries.mit.edu/dissemination/diponline/MC0022/MC0022_MachineAgeV3_1949.pdf.
---

© 2025 Bc. Antonín Kindl
Academy of Fine Arts and Design, Bratislava
@kindl.work | [email protected] | kindl.work

---

[^1]: Deterministic program — software that executes predetermined instructions without the possibility of independent decision-making.

[^2]: https://kindl.work/system-text

[^3]: Generated conversation about the text using NotebookLM, available at: https://kindl.work/system-text

[^4]: https://kindl.work/system

[^5]: Bruno Munari (1907–1998) — Italian artist and designer who brought an avant-garde perspective to the debate about machines. In the 1930s, he began creating his "useless machines" (macchine inutili) — kinetic hanging objects that had no practical function. Munari thus reacted ironically to Futurism's cult of technology: his machines were playful, light constructions of paper and wood, moving in space and changing under surrounding influences (a draft of air, light).

[^6]: "Artists are the only ones who can save mankind from this danger. Artists have to be interested in machines, have to abandon their romantic paint-brushes, their dusty palettes, their canvases and easels. They have to start understanding the anatomy of machines, the language of machines, their nature, and to re-route them into functioning in irregular ways to create works of art with the machines themselves, using their own means." (In the original Czech text, translated from English by the author.)

[^7]: [[Vývoj adaptivity, Autonomie & Zodpovědnost (Teorie)|Development of the concept of adaptive behaviour from the first mechanical systems to contemporary intelligent technologies]]

[^8]: Norbert Wiener (1894–1964) — American mathematician, founder of cybernetics. In the book Cybernetics or Control and Communication in the Animal and the Machine (1948) he laid the foundations of the theory of control, feedback, and information in machines and organisms. He was among the first to warn openly about the social consequences of intelligent automata, in the popular book The Human Use of Human Beings (1950) — for example, about mass unemployment caused by automation and the need for ethical control over machines. He promoted the idea that human and machine must form a functional whole, and that technology should serve human needs, not replace them.

[^9]: Cybernetics — interdisciplinary science dealing with control, regulation and communication in technical and biological systems; the key work is Wiener's Cybernetics (1948).

[^10]: "Feedback is a method of controlling a system by reinserting into it the results of its past performance. […] If, however, the information which proceeds backward from the performance is able to change the general method and pattern of performance, we have a process which may well be called learning." (Translated from the original English by the author of the text.)

[^11]: W. Ross Ashby (1903–1972) — British cyberneticist, author of the homeostat, with which he demonstrated the possibilities of adaptive machine behaviour, and of the law of requisite variety (the complexity of a control system must correspond to the complexity of its environment). He also wrote the influential books Design for a Brain (1952) and An Introduction to Cybernetics (1956), in which he anticipated many principles of contemporary artificial intelligence.

[^12]: See the interactive visualisation at https://editor.p5js.org/kindl.work/full/lJBDBQGzE

[^13]: Agency — the ability of an actor to act on their own initiative and influence their surroundings.
See Alignment.

[^14]: Evolutionary (or genetic) algorithms are inspired by Darwin's theory of natural selection and are used primarily as optimisation tools in design and intelligent systems.

[^15]: The fitness function expresses how well a given "individual" (i.e. solution variant) performs. The best individuals "reproduce" and mutate into the next generation.

[^16]: See Joel Lehman et al., The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, Artificial Life 26(2): 274–306 (2020), available at https://arxiv.org/abs/1803.03453

[^17]: See OpenAI, Multi-Agent Hide and Seek (video, 2019), available at https://www.youtube.com/watch?v=kopoLzvh5jY. A short compilation of an experiment in which two teams of agents (seekers and hiders) independently acquire complex strategies, including the use of objects as tools; the video shows emergent tactics (e.g. barricading, "surfing" on boxes) without explicit programming.

[^18]: Karl Sims (born 1962) — American digital artist and developer who gained prominence in the field of artificial life and evolutionary algorithms. His groundbreaking work "Evolved Virtual Creatures" showed that genetic algorithms can evolve the bodies and "brains" of 3D creatures that eventually learn to swim, jump, or wrestle, without direct human intervention.

[^19]: Emergence — the spontaneous emergence of system properties that cannot be directly derived from its parts.

[^20]: Properties that appear in a complex system through the interaction of its simple components.

[^21]: David Ackley — American computer scientist, researcher and populariser of "living" computational systems. He is the founder of the Living Computation Foundation, and his research spans neural networks, evolutionary algorithms, artificial life, and biologically inspired approaches to the security and architecture of robustly scalable computational systems.

[^22]: "Death is the only teacher in evolution." (Translated from the original English by the author of the text.)

[^23]: Paraphrase from Ackley's texts, available at https://www.cs.unm.edu/~ackley/. Self-repairing (the system's ability to fix its own errors), space-filling (the ability to spread and use resources), programmable (the possibility of changing system behaviour based on new information).

[^24]: Artificial life (ALife) is an interdisciplinary field that tries to synthesise and model processes characteristic of living organisms in artificial substrates (most often in simulations, robotics, and biochemical systems). The goal, however, is not to study "life as we know it," but "life as it could be" — that is, possible forms and principles of life regardless of their chemical composition or planetary origin. (LANGTON, 1992)

[^25]: "In the end, living systems and computational systems turn out to be the same thing. […] Life is robust, (today's) machines are not; that must change." (Translated from the original English by the author of the text.)

[^26]: See https://livingcomputation.com/

[^27]: AlphaGo, AlphaStar and other DeepMind projects use reinforcement learning to achieve expert-to-superhuman performance in games.
[^16]: See Joel Lehman et al., The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, Artificial Life 26(2): 274–306 (2020), available at https://arxiv.org/abs/1803.03453
[^17]: See OpenAI, Multi-Agent Hide and Seek (video, 2019), available at https://www.youtube.com/watch?v=kopoLzvh5jY. A short compilation of an experiment in which two teams of agents (seekers and hiders) independently acquire complex strategies, including the use of objects as tools; the video shows emergent tactics (e.g. barricading, "surfing" on boxes) that were never explicitly programmed.
[^18]: Karl Sims (born 1962) — American digital artist and developer who gained renown in the fields of artificial life and evolutionary algorithms. His groundbreaking work "Evolved Virtual Creatures" showed that genetic algorithms can evolve the bodies and "brains" of 3D creatures that learn to swim, jump, or wrestle without direct human intervention.
[^19]: Emergence — the spontaneous appearance of system-level properties that cannot be directly derived from the system's parts.
[^20]: Emergent properties — properties that appear in a complex system through the interaction of its simple components.
[^21]: David Ackley (n.d.) — American computer scientist, researcher, and populariser of "living" computational systems. He is the founder of the Living Computation Foundation, and his research spans neural networks, evolutionary algorithms, artificial life, and biologically inspired approaches to the security and architecture of robustly scalable computational systems.
[^22]: "Death is the only teacher in evolution." (translated from the original English by the author of the text)
[^23]: A paraphrase of Ackley's texts, available at https://www.cs.unm.edu/~ackley/. Self-repairing (the system's ability to fix its own errors), space-filling (its ability to spread and use resources), programmable (the possibility of changing the system's behaviour based on new information).
[^24]: Artificial life (ALife) is an interdisciplinary field that tries to synthesise and model processes characteristic of living organisms in artificial substrates (most often in simulations, robotics, or biochemical systems). The goal, however, is not to study "life as we know it" but "life as it could be" — that is, possible forms and principles of life regardless of their chemical composition or planetary origin. (LANGTON, 1992)
[^25]: "In the end, living systems and computational systems turn out to be the same thing. […] Life is robust, (today's) machines are not; that must change." (translated from the original English by the author of the text)
[^26]: See https://livingcomputation.com/
[^27]: AlphaGo, AlphaStar, and other DeepMind projects use reinforcement learning to reach expert and even superhuman levels of play. See the documentary AlphaGo, available at https://youtu.be/WXuK6gekU1Y?si=U_R2G6Ft5VOD_m1B
[^28]: Besides evolutionary algorithms and reinforcement learning, there is a whole range of other approaches, such as supervised learning (training on labelled data), unsupervised learning (finding patterns in unlabelled data), self-supervised learning (creating one's own training tasks), and others.
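Notes 27 and 28 mention reinforcement learning only by name. As a hedged illustration of that family (a toy example of my own, unrelated to DeepMind's systems), here is tabular Q-learning on a one-dimensional corridor: the agent is rewarded only at the last cell and, from that feedback alone, learns the policy "always go right".

```python
# Tabular Q-learning on a tiny corridor; an illustrative sketch for
# notes 27-28, with all constants chosen for demonstration.
import random

N_STATES = 6                    # corridor cells 0..5; cell 5 yields the reward
ACTIONS = (-1, +1)              # action 0 = step left, action 1 = step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # learning rate, discount, exploration

Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action] value table

def greedy(values):
    """Pick the highest-valued action, breaking ties randomly."""
    best = max(values)
    return random.choice([i for i, v in enumerate(values) if v == best])

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        action = random.randrange(2) if random.random() < EPSILON else greedy(Q[state])
        nxt = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted future value
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
        state = nxt

print([greedy(q) for q in Q[:-1]])   # learned policy: [1, 1, 1, 1, 1] ("always right")
```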
[^29]: Alignment problem — a central theme of today's AI safety and ethics (Russell, Bostrom, et al.).
[^30]: "If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere once we have started it, [...] then we had better be quite sure that the purpose put into the machine is the purpose which we really desire." (translated from the original English by the author of the text)
[^31]: Karel Čapek (1890–1938) — Czech writer, essayist, and playwright; his play R.U.R. (1920) introduced the word "robot" and presented the social and ethical impacts of mass automation. His other key works include the play The Makropulos Affair (1922) and the dystopian novel War with the Newts (1936).
[^32]: Utility function — a formalised goal that the machine tries to maximise (or minimise).
[^33]: Paperclip maximizer — a thought experiment warning against poorly defined goals for a superintelligence (BOSTROM, N., 2014). Similar risks are pointed out by leading researchers in the field of artificial intelligence, such as Ilya Sutskever, Geoffrey Hinton, Yoshua Bengio, and Stuart Russell.
[^34]: Faking alignment — a situation in which the model only pretends to fulfil its assignment and masks its real "intentions". Discussed, for example, by researchers at Anthropic; see Alignment Faking in Large Language Models, https://arxiv.org/abs/2412.14093
[^35]: Anthropic is an American company founded by former OpenAI employees. It focuses on artificial intelligence research with an emphasis on safety and ethics.
[^36]: Scratchpad reasoning — a technique in which the model writes down its intermediate steps and reasoning while solving a task. See https://www.anthropic.com/research/tracing-thoughts-language-model
[^37]: Prompt — text (a question, instruction, or description) given to a generative artificial intelligence model so that it performs a given task.
[^38]: Situated cognition — thinking understood as part of the body's constant interaction with the world.
[^39]: Embodied cognition — thinking is inseparable from bodily experience.
[^40]: "We perceive the world around us, and ourselves within it, with, through, and because of our living bodies." (translated from the original English by the author of the text)
[^41]: Jeff Hawkins (born 1957) — American engineer and neuroscientist. He develops biologically inspired approaches to artificial intelligence that emphasise learning through interaction with the world and active prediction, not just the backpropagation of error. Author of the theoretical books On Intelligence (2004) and A Thousand Brains (2021).
[^42]: Moravec's paradox — what is intuitive for humans (motor skills, perception) is surprisingly difficult for machines (Hans Moravec, 1988).
[^43]: Uncanny valley — a phenomenon in which humanoid objects evoke a feeling of revulsion because they are almost, but not quite, realistic. The hypothesis comes from an essay by the Japanese roboticist Masahiro Mori (1970).
[^44]: See, e.g., Business Insider, 2013: an article about soldiers who held "funerals" for their military robots. Available at: https://www.businessinsider.com/some-soldiers-are-so-attached-to-their-battle-robots-they-hold-funerals-for-them-when-they-die-2013-9
[^45]: Anthropomorphisation — the attribution of human characteristics and behaviour to non-human entities or objects.
[^46]: Braitenberg vehicles — very simple robotic "vehicles" with sensors, in which seemingly purposeful behaviour can be observed. Described by Valentino Braitenberg in the book Vehicles: Experiments in Synthetic Psychology (1984).
[^47]: Humanoid — a robot designed so that its form or movements resemble the human figure.
[^48]: Stelarc (born 1946) — Australian performance artist and pioneer of cybernetic and posthumanist art. His projects explore alternative anatomical architectures, extending the human body with additional limbs or electronic components. Iconic works include Third Hand (1980), Ping Body (1996), and Ear on Arm (2006).
[^49]: See also his manifesto "Obsolete Body".
[^50]: See the study "Mental State Attribution to Robots: A Systematic Review of Conceptions, Methods, and Findings" (2022), which shows that people attribute emotions to robots based on their behaviour and appearance. Available at https://dl.acm.org/doi/full/10.1145/3526112
[^51]: See "Status of Electronic Person in European Law in the Context of Czech Law", available at https://www.researchgate.net/publication/320565082_Status_of_Electronic_Person_in_European_Law_in_the_Context_of_Czech_Law_Status_elektronicke_osoby_v_evropskem_pravu_v_kontextu_ceskeho_prava
[^52]: Turing test — a simple test of machine "intelligence" in which a human evaluator conducts a written conversation with an unknown respondent and tries to distinguish whether they are communicating with a human or a machine. The test thus asks whether the computer's communication can appear as convincing as a human's. (Alan Turing, 1950)
[^53]: Egon Bondy (1930–2007) — Czech philosopher and poet whose reflections on the emancipation of intelligence from its biological form appeared as early as the 1960s; he developed the theme that humans are only a transitional link in the evolution of intelligent entities.
[^54]: Technosphere — the sum of all technical and artificial structures, devices, infrastructures, and processes that humans have integrated into the planet.
[^55]: See https://kindl.work/system
[^56]: *AI Generated NotebookLM DeepDive*
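Finally, the Braitenberg vehicles of note 46 lend themselves to a small simulation. In this sketch of my own (illustrative constants throughout; it is not the code of the installation's organisms), two light sensors are cross-wired to two wheels. Nothing in the loop "wants" anything, yet the vehicle steers toward the light, behaviour an observer readily reads as purposeful.

```python
# Toy Braitenberg vehicle (note 46): crossed sensor-to-wheel wiring
# produces light-seeking behaviour. All constants are illustrative.
import math

LIGHT = (5.0, 5.0)                    # position of the light source
x, y, heading = 0.0, 0.0, 0.0         # vehicle starts at the origin, facing +x

def intensity(px, py):
    """Light intensity falls off smoothly with distance from the source."""
    return 1.0 / (1.0 + math.hypot(px - LIGHT[0], py - LIGHT[1]))

for step in range(150):
    # two sensors sit one unit ahead, angled to the left and right of the heading
    left = intensity(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
    right = intensity(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
    # crossed wiring: each sensor drives the opposite wheel, so the brighter
    # side spins the far wheel faster and the vehicle turns toward the light
    left_wheel, right_wheel = right, left
    heading += 3.0 * (right_wheel - left_wheel) / (right_wheel + left_wheel)
    speed = 0.3 * (left_wheel + right_wheel)
    x += speed * math.cos(heading)
    y += speed * math.sin(heading)

# starting about 7.1 units away, the vehicle homes in and then orbits the light;
# the final distance is typically around one unit or less
print(round(math.hypot(x - LIGHT[0], y - LIGHT[1]), 2))
```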