*Read in Czech >> [[Vývoj adaptivity, Autonomie & Zodpovědnost (Teorie)]]*

*Resources:* [[AI Alignment]] [[Alignment faking in large language models]] [[Člověk ve věku stroje — Josef Šafařík]] [[Paperclip maximalizer]] [[AI Deep Dive]] [[AI Terms]] [[System (Process)|System]] [[System (Teorie – Okruhy)]]

---

# Evolution of Adaptivity, Autonomy & Responsibility

_The development of the concept of adaptive behavior, from the first mechanical systems to today's intelligent technologies. Topics include self-learning machines and the ethical questions raised by increasing autonomy, including the limits of control, accountability, and their roles in techno-human ecosystems._

---

### The First Sailing Ships
_Sumerian, Mesopotamian, and later Egyptian sailing ships, from c. 3500 BCE_

Sails could be oriented to catch wind from various directions, effectively harnessing and adapting to changing wind conditions for navigation.

---

### Shaduf
_Egypt and Mesopotamia, from c. 2000 BCE_

A simple lever-based irrigation device used to lift water from a river onto fields. It consists of a long beam with a bucket on one end and a counterweight on the other.

_Its adaptive capability lies in letting the operator regulate the amount of water drawn according to current irrigation needs._

---

### Nilometer
_Egypt, from c. 1800 BCE_

A structure _(often a well or terraced chamber)_ used to measure the flood level of the Nile. Farmers and officials used these measurements to adjust planting plans and taxation according to the annual fluctuations of the floods.

---

### Clepsydra
_Egypt, Greece, China, etc., from c. 1500 BCE_

A vessel from which water drips at a constant rate to measure time. Adjustments (such as the size of the opening, accounting for water temperature, etc.)
were made to maintain a relatively steady outflow, _an early form of "calibration" based on the dynamics of the environment or of fluids._

---

### Mechanical Automata
_1st century CE_

Hero of Alexandria developed various automata and self-regulating systems _(for example, self-filling wine bowls, fountains, and automatic doors)._ He frequently used floats, siphons, and valves that responded automatically to changing liquid levels, illustrating the principles of equilibrium and feedback control.

---

### Roman Aqueducts
_1st–2nd century CE_

Roman aqueducts were sophisticated water-supply systems for cities that relied on gravity and precise engineering to maintain a constant flow of water.

_Their adaptive capability lay in the system's ability to regulate the water supply according to the population's needs and resource availability, using valves and distribution channels that were adjusted to current conditions._

---

### Persian Windmills
_7th century CE_

Vertical-axis windmills used for grinding grain or pumping water. They could be adjusted _(for instance, by orienting the walls or regulating the vent openings)_ to the prevailing winds, adapting the structure's function to local wind direction and speed.

---

### Mechanical Clocks
_14th century CE, Europe_

The verge-and-foliot mechanism _(and later improvements)_ introduced regulation akin to negative feedback to maintain the even motion of the gear train: the oscillating foliot balance periodically checks the escape wheel.

_This mechanism kept the rate of ticking steady and prevented clocks from running too fast or too slow, a significant leap forward in timekeeping accuracy._

---

### Flyball Governor
_1788_

A mechanical feedback system adapted by James Watt to regulate the speed of steam engines. It consists of a rotating spindle with ball weights that swing outward or drop inward depending on the engine speed.
_In this way, the system automatically adjusts the steam supply through a throttle valve to maintain a constant engine speed, a founding principle of the adaptive regulation later formalized in cybernetics._

---

### The Evolution of Adaptivity in the 20th–21st Centuries

The formal study of feedback loops and adaptation led to breakthroughs in automation, robotics, and artificial intelligence, where systems learn and self-regulate in increasingly complex ways.

- _1930s–1940s_: The emergence of control theory
- _1950s_: Cybernetics
- _1950s–1960s_: Neural networks
- _1970s–1980s_: Expert systems and backpropagation
- _21st century_: Machine learning and robotics

---

#### _1930s–1940s_ — The Emergence of Control Theory

Engineers and mathematicians formalized feedback loops (proportional–integral–derivative, or PID, controllers) to regulate system outputs in everything from industrial machinery to aircraft. Systems adapt by continuously measuring the output and comparing it to the target value.

---

#### _1948_ — Norbert Wiener's Cybernetics

Wiener's work laid the conceptual foundations for the analysis of control and communication in animals and machines. Cybernetics formalized how feedback and adaptation could be applied across biology, engineering, and social systems.

---

#### _1950s–1960s_ — Early Research in Neural Networks

Pioneers such as McCulloch, Pitts, and Rosenblatt (with the Perceptron) explored computational models of neurons. Networks learned from input–output pairs by adjusting internal parameters (weights) to modify their behavior over time.

---

#### _1970s–1980s_ — Expert Systems and Backpropagation

Expert systems encoded human specialist knowledge as explicit if–then rules. The backpropagation algorithm (popularized in the mid-1980s) then enabled more complex, multilayer neural networks to adjust their internal representations in response to errors, spurring advances in pattern recognition.
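The weight adjustment described above can be made concrete. Below is a minimal sketch of the perceptron learning rule: predict, compare to the target, and nudge the weights in proportion to the error. The logical-AND task, learning rate, and epoch count are illustrative assumptions, not details from this note.

```python
# A minimal sketch of the perceptron learning rule (Rosenblatt, 1950s).
# The dataset, learning rate, and epoch count are illustrative only.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Adjust weights whenever the predicted output disagrees with the target."""
    w = [0.0, 0.0]  # internal parameters ("weights")
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Predict by thresholding the weighted sum of the inputs.
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Learning step: shift each weight in proportion to the error.
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Logical AND: the output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])
# -> [0, 0, 0, 1]
```

The same "measure error, adjust parameters" loop, scaled up and driven by gradients, is what backpropagation performs across many layers.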
---

### _21st Century_ — Modern Artificial Intelligence, Machine Learning, and Robotics

Advanced systems (e.g., reinforcement learning, deep learning, autonomous vehicles) continuously adapt to real-world data, modifying their models and actions to maximize performance, reliability, or safety in dynamic environments.

---

### Self-Learning Machines

Adaptive learning and decision-making: _they adjust their behavior based on new data and/or experience._ They often rely on machine learning (ML) or deep learning (DL). Over time, they can change their decision-making rules, sometimes producing unpredictable outcomes that were never explicitly programmed by humans.

_Just as unpredictable as humans._

_"Machine intelligence is the last invention that humanity will ever need to make. Machines will then be better at inventing than we are."_ — Nick Bostrom

---

### Autonomy

The ability to act independently, without continuous human oversight. _It is a spectrum: from making discrete decisions to full self-sufficiency._ From tool to agent.

**[[Trolley Problem]]**
**[[Alignment faking in large language models]]**

---

### Key Issues

#### I. Accountability
Highly autonomous artificial intelligence can create _"accountability gaps"_: if machines act independently, who is held responsible when something goes wrong?

#### II. Algorithmic Bias
Self-learning machines may _inadvertently_ absorb societal prejudices embedded in their training data _(for example, racial or gender discrimination in hiring algorithms)._

#### III. Privacy and Surveillance
AI-driven decision-making often relies on personal or sensitive data. Self-learning systems may collect _enormous amounts of information, enabling detailed profiling._

#### IV. Transparency and Explainability (XAI)
Many learning models are _"black boxes"_: their internal reasoning can be opaque even to experts.
_+ Copyright_

---

#### _Why even address "accountability"?_

---

Accountability is a human concept: it stems from our need to seek causes for phenomena and events, and at the same time **to find a "culprit"** when something negative or harmful occurs. Yet in many situations such a search for a culprit makes no sense. In the past, natural disasters were blamed on gods or demons, and later on various social groups _(the so-called scapegoat)._

?? When using AI, can we always assign an **objective risk**, or design the system so that punishment actually reaches the culprit? ??

In many cases, rather than seeking a culprit, we look for ways to compensate for the damage. For AI, this could lead to a model _in which we do not ask "who is at fault?" but instead ensure that funds and insurance exist to provide compensation._

---

### Current Strategies and Approaches

#### Developer and Operator _— Vicarious Liability_
_The developer, manufacturer, or operator of an artificial intelligence system could be held liable for any harm caused, regardless of the system's level of autonomy._

#### Insurance Model
_Companies and users of autonomous systems could be required to carry special insurance. In the event of damage, the insurer provides compensation and may then seek recourse against the responsible party._

#### Regulation: the _AI Act_
Aims to categorize artificial intelligence systems by risk (e.g., "high-risk" systems require stricter oversight and compliance obligations).
#### Technical and Organizational Solutions such as _"Human in the Loop"_
Maintaining a certain level of human oversight over critical decision-making processes, ensuring that final **accountability rests with a human.**

#### Explainable AI _(XAI)_ and Auditability
_Techniques for clarifying AI decisions (e.g., feature-importance metrics, locally interpretable model explanations)._

#### Robust Validation and Verification Processes
Intensive testing, simulation, and verification prior to deployment to ensure that the AI system meets safety and ethical criteria.

#### Algorithmic Impact Assessments _(Pre-Deployment)_
Structured assessments of an algorithm's potential societal and ethical impacts, performed before deployment, analogous to the environmental impact assessments organizations must carry out.

---

### Possible Solutions

- Shared accountability, _smart contracts, blockchain_
- Legal personhood for AI, _highly controversial_
- Safety nets and compensation funds
- Ethical and professional standards

---

### Prevention

- Value alignment _(Alignment)_
- Continuous monitoring and system management
- Transparency in reporting
- International harmonization of standards

---

[[Okruh 03 Notes]]