*Resources:* https://en.wikipedia.org/wiki/Trolley_problem | https://www.brookings.edu/articles/the-folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/ | https://www.futurity.org/autonomous-vehicles-av-ethics-trolley-problem-2863992-2/ < < [[Ethical dilemma of autonomous driving]]
---
*wiki:*
The **trolley problem** is a series of [thought experiments](https://en.wikipedia.org/wiki/Thought_experiment "Thought experiment") in [ethics](https://en.wikipedia.org/wiki/Ethics "Ethics"), [psychology](https://en.wikipedia.org/wiki/Psychology "Psychology"), and [artificial intelligence](https://en.wikipedia.org/wiki/Artificial_intelligence "Artificial intelligence") involving stylized [ethical dilemmas](https://en.wikipedia.org/wiki/Ethical_dilemma "Ethical dilemma") about whether to sacrifice one person to save a larger number. The series usually begins with a [scenario](https://en.wikipedia.org/wiki/Scenario_(vehicular_automation) "Scenario (vehicular automation)") in which a [runaway](https://en.wikipedia.org/wiki/Runaway_train "Runaway train") [tram](https://en.wikipedia.org/wiki/Tram "Tram"), trolley, or [train](https://en.wikipedia.org/wiki/Train "Train") is on course to collide with and kill a number of people (traditionally five) down the [track](https://en.wikipedia.org/wiki/Railway_track "Railway track"), but a driver or bystander can intervene and divert the vehicle to kill just one person on a different track. Other variations of the runaway vehicle and analogous life-and-death dilemmas (medical, judicial, etc.) are then posed, each containing the option to either do nothing, in which case several people will be killed, or intervene and sacrifice one initially "safe" person to save the others.
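
Purely as an illustration of the structure (my own sketch, not from the article), the stylized case can be reduced to a choice between two outcomes. The Python snippet below assumes a naive fatality-minimizing rule, which is exactly the rule the variations are designed to stress-test; the `Outcome` type and the numbers are hypothetical.

```python
# Toy sketch: the stylized dilemma as a choice between two outcomes.
# The Outcome type and the specific numbers are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Outcome:
    action: str       # "do nothing" or "divert"
    fatalities: int   # people killed if this action is taken


def naive_utilitarian_choice(outcomes: list[Outcome]) -> Outcome:
    """Pick the action with the fewest fatalities.

    This is the rule the thought experiment asks us to question, since it
    treats actively sacrificing the initially "safe" person as morally
    equivalent to letting the larger group die.
    """
    return min(outcomes, key=lambda o: o.fatalities)


if __name__ == "__main__":
    classic_case = [
        Outcome("do nothing", fatalities=5),  # trolley stays on the main track
        Outcome("divert", fatalities=1),      # bystander pulls the lever
    ]
    print(naive_utilitarian_choice(classic_case))
    # Outcome(action='divert', fatalities=1)
```

The point of the variations (and of the autonomous-vehicle debate in the linked articles) is that such a one-line rule settles the arithmetic but not the ethics: whether minimizing fatalities is the right objective at all is the open question.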