The IET NW Midlands webinar on the Ethics of Autonomous Vehicles, broadcast on March 16, 2021, discussed the technological architecture of autonomous vehicles, focusing on the use of intelligent agents to enhance trust in these systems. The architecture combines hardware and software to enable a vehicle to act autonomously: a variety of sensors and cameras feed data into AI-driven algorithms, which process that data rapidly to make decisions in complex situations, while the vehicle’s mechanical components, such as steering and braking, execute those decisions.
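To make that sense-decide-act pipeline concrete, the sketch below mocks the flow in Python. Everything in it (the SensorFrame and Command types, the decide function, the two-second-gap heuristic) is a hypothetical illustration under assumed names, not code from the webinar or from any production autonomous-vehicle stack.

```python
# A minimal sketch of the sense-decide-act loop described above.
# All names here are hypothetical illustrations, not a real AV stack.
from dataclasses import dataclass


@dataclass
class SensorFrame:
    """One snapshot of fused sensor data (camera, lidar, radar, etc.)."""
    obstacle_distance_m: float   # distance to the nearest obstacle ahead
    ego_speed_mps: float         # current vehicle speed


@dataclass
class Command:
    """Decision handed to the mechanical layer (steering/braking)."""
    brake: float      # 0.0 (no braking) to 1.0 (full braking)
    steer_rad: float  # steering angle in radians


def decide(frame: SensorFrame) -> Command:
    """Decision layer: turn perception into an actuation command."""
    # Illustrative rule only: brake harder the closer the obstacle,
    # using a crude two-second following-gap heuristic.
    safe_gap_m = 2.0 * frame.ego_speed_mps
    if frame.obstacle_distance_m < safe_gap_m:
        brake = min(1.0, safe_gap_m / max(frame.obstacle_distance_m, 0.1) - 1.0)
    else:
        brake = 0.0
    return Command(brake=brake, steer_rad=0.0)


def control_loop(frames):
    """Sensors feed decisions; actuators would then execute them."""
    for frame in frames:
        cmd = decide(frame)
        # In a real vehicle this would drive brake and steering actuators.
        print(f"speed={frame.ego_speed_mps:.1f} m/s  "
              f"gap={frame.obstacle_distance_m:.1f} m  brake={cmd.brake:.2f}")


if __name__ == "__main__":
    control_loop([SensorFrame(40.0, 15.0), SensorFrame(20.0, 15.0)])
```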
Here, the adage “the computer never does something wrong, only the coder” becomes pertinent, stressing that it is human input, through code, that dictates the vehicle’s behavior. This places an ethical responsibility on developers, who must ensure that their coding practices align with considerations of safety and fairness. In utilitarian terms, coders must strive for algorithms whose decisions maximize benefit while minimizing harm. They should also respect Kantian ethics, making choices grounded in duty and the rights of individuals rather than in outcomes alone.
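One way to picture how these two framings could interact in code is a decision rule that minimizes a utilitarian harm estimate, but only over actions that a deontological constraint has not already ruled out. The sketch below is purely illustrative: the candidate actions, harm scores, and forbidden set are assumptions made for this example, not anything specified in the webinar.

```python
# A hedged illustration of combining a utilitarian harm estimate with a
# Kantian-style hard constraint. All values below are invented.
CANDIDATE_ACTIONS = {
    # action: expected harm score (lower is better, utilitarian view)
    "brake_hard": 0.2,
    "swerve_left": 0.5,
    "maintain_speed": 0.9,
}

# Deontological constraint: some actions are impermissible regardless of
# their expected outcome.
FORBIDDEN_ACTIONS = {"maintain_speed"}


def choose_action(candidates: dict, forbidden: set) -> str:
    """Pick the lowest-harm action among those not ruled out on principle."""
    permissible = {a: h for a, h in candidates.items() if a not in forbidden}
    if not permissible:
        raise ValueError("No permissible action available")
    return min(permissible, key=permissible.get)


print(choose_action(CANDIDATE_ACTIONS, FORBIDDEN_ACTIONS))  # -> "brake_hard"
```

The design point is simply that the outcome-based score and the duty-based filter occupy different places in the decision rule, rather than being blended into a single number.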
Furthermore, when implementing these ethical principles, programmers must also consider the Principle of Double Effect discussed in the webinar: an action intended to do good may still be permissible even though it also produces foreseen but unintended negative outcomes. It is an acknowledgment of the complex moral landscape in which autonomous vehicles operate.
Ethical transparency is therefore key to consolidating trust in autonomous vehicle technology.