Ethics of Autonomous Vehicles

This is a response to an IET webinar on the ethics of autonomous vehicles, presented by Michael Fisher in 2021. It is written around a hypothetical scenario in which my team is building something larger and more complex than a simple micro-mouse: a system able to act autonomously.

When discussing the autonomy of a system it is important to first establish what that really means; in this scenario, it refers to a system with the ability to make critical decisions in uncertain environments. A popular example is the driverless car. The idea has been around for a long time, and with some manufacturers getting closer and closer to a fully autonomous vehicle, it is natural to ask why nobody has built one yet.

The first issue is trusting the machine to make its own decisions safely. In a high-speed environment such as a road, can you have confidence in the autonomous system to adjust to changes and to account for the errors other drivers may make? Secondly, who is at fault if an accident does occur: the manufacturer who created the system, or the person in the car?

Another issue is knowing the intent of the system. In the webinar, the Terminator is used as an example of a fully autonomous system that acts essentially of its own will: the point at which someone learns the machine's intentions is when it does or does not try to shoot them. Whilst it is unlikely that robots will take over the world, it is a good question to ask about the future of autonomous systems. For them to be autonomous they need to be able to make their own decisions, so giving the user some way of knowing the intentions of the system is an important step in trusting these machines.

One of the ways to address this issue is through the architecture of the system. The methods shown in the webinar are modularity, transparency and verifiability. This way the user can see what the system wants to do, has some control over what it can do, and has a way of knowing how every part of the system functions. This helps because it is much easier to have confidence in a system when you can understand it and know that you can stop it if it begins to malfunction. With this style of architecture, the user makes the high-level decisions about what they want the system to do. For example, the user can select the destination for their car whilst the system handles speed and obstacle avoidance. The user can also still use the steering wheel if they feel an intervention is needed.
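The split described above can be sketched in code. This is only a minimal illustration, with hypothetical class and method names (the webinar does not specify an implementation): the user supplies the high-level goal, separate modules handle low-level control, a decision log provides transparency, and an override flag gives the user a way to take back control.

```python
class NavigationModule:
    """Plans a route to the destination chosen by the user (high-level goal)."""
    def plan(self, destination):
        return ["leave_driveway", "main_road", destination]


class ControlModule:
    """Handles low-level tasks such as speed and obstacle avoidance."""
    def __init__(self):
        self.speed = 0.0

    def step(self, obstacle_ahead):
        # Stop when an obstacle is detected, otherwise cruise.
        self.speed = 0.0 if obstacle_ahead else 50.0
        return self.speed


class AutonomousCar:
    """Top-level module: the user sets the goal and may override at any time."""
    def __init__(self):
        self.nav = NavigationModule()
        self.control = ControlModule()
        self.user_override = False
        self.log = []  # transparency: every decision is recorded for the user

    def drive_to(self, destination, obstacles):
        route = self.nav.plan(destination)
        for waypoint, obstacle in zip(route, obstacles):
            if self.user_override:
                self.log.append((waypoint, "user in control"))
                continue
            speed = self.control.step(obstacle)
            self.log.append((waypoint, f"speed={speed}"))
        return self.log


car = AutonomousCar()
log = car.drive_to("city_centre", obstacles=[False, True, False])
```

Because each concern lives in its own module, each part can be inspected and verified separately, which is exactly what makes this architecture easier to trust.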

The verification of a system is quite an exhaustive process in which almost all scenarios are tested and the response of the system is analysed. This can be done through real testing or through simulation, as shown in the webinar. The example used here is an autonomous plane and what it will do if another vehicle is flying at it head on. Whilst this would be very dangerous and expensive to test in the real world, simulation provides a reliable solution to the problem.
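Scenario-based verification can be sketched as follows. This is a toy example with made-up names and decision rules (not the webinar's actual verification tooling): a simple head-on response rule is simulated over a grid of encounter scenarios, and a safety property is checked in every one.

```python
def respond(own_altitude, intruder_altitude, closing):
    """Toy decision rule for a head-on encounter between two aircraft."""
    if not closing:
        return "maintain_course"
    # Climb if the intruder is below or level with us, otherwise descend.
    return "climb" if intruder_altitude <= own_altitude else "descend"


def verify():
    """Exhaustively simulate a (small) grid of head-on encounter scenarios.

    Safety property: the aircraft must never maintain course while another
    vehicle is closing head on. Returns the list of scenarios that fail.
    """
    failures = []
    for own in range(100, 600, 100):
        for intruder in range(100, 600, 100):
            action = respond(own, intruder, closing=True)
            if action == "maintain_course":
                failures.append((own, intruder))
    return failures


failures = verify()
```

An empty failure list means the property held in every simulated scenario; a real verification effort would cover far richer scenario spaces, but the principle of checking the system's response against a stated property is the same.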

Finally, there is the problem of teaching the autonomous system the ethics of a human. A simple priority-based system is used in the webinar to show that the UAV in this case should value human life over animal life, and animal life over property damage. Although this is quite a simple example of an ethical system, it is important to know what a machine will do if things go wrong.
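The priority ordering above can be expressed very compactly. This is a minimal sketch with hypothetical names, assuming the choice reduces to picking the option whose worst potential harm ranks lowest: human over animal, animal over property.

```python
# Higher number = worse harm, matching the webinar's ordering.
PRIORITY = {"human": 3, "animal": 2, "property": 1}


def choose_path(paths):
    """Pick the path whose worst potential harm has the lowest priority,
    i.e. prefer damaging property over harming an animal or a human."""
    return min(paths, key=lambda p: max(PRIORITY[h] for h in p["harms"]))


paths = [
    {"name": "left",  "harms": ["animal"]},
    {"name": "ahead", "harms": ["human"]},
    {"name": "right", "harms": ["property"]},
]
best = choose_path(paths)  # "right": property damage is the least bad outcome
```

Even a rule this simple makes the machine's behaviour in a failure case predictable, which is the point being made in the webinar.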

The main point I took from this webinar is that it is important for the system to give the user feedback about its intentions, and for the high-level actions to be controlled by the user. Having some control over, and understanding of, the system is an important step when creating this new technology.