The discussion of the ethics of autonomous vehicles offers the opportunity to raise many of the ethical issues of AI and to discuss them according to different ethical frameworks. But, first of all, what does autonomy mean and how is it used in this context? Here we use the term operative autonomy to describe the ability of a system to perform its tasks without continuous human supervision, so that the system can adapt to dynamic contexts and unexpected situations. In the case of autonomous vehicles, the notion of operative autonomy is translated into a framework that specifies five different levels of autonomy: it starts from L1, where basic functions such as acceleration are automated, and arrives at L5, which expresses the complete autonomy of the vehicle from the human being under any condition.

The development of autonomous vehicles has pros and cons from a moral point of view. One of the reasons to develop autonomous vehicles is to increase the safety of transport while significantly reducing accidents: autonomous vehicles are better than humans at obeying existing laws, and they cannot get tired, drunk, or bored. Other reasons are to increase the autonomy of people, like the elderly, who cannot drive, and to make traffic more efficient and thus reduce pollution. At the same time, the development of autonomous vehicles raises threats to individual privacy, freedom, and the attribution of responsibility.

Let us now focus on unavoidable collisions, which allow us to discuss many ethical issues under different ethical frameworks. An unavoidable collision is a situation in which the vehicle cannot avoid the impact with an obstacle. Even if this situation is undesirable, it is not possible to exclude it completely. It is thus clear that, in the case of unavoidable collisions, we delegate to autonomous vehicles not only a number of actions, such as parking operations, but also some moral choices.
For example, an autonomous vehicle could choose the angle of impact so as to maximize the protection of its passengers. Let us consider two different scenarios. In the first one, two cyclists are crossing the road in a direction orthogonal to that of an autonomous vehicle: one of the cyclists is wearing a helmet, the other is not. The vehicle cannot avoid colliding with one of them, but it still has enough time to decide with whom. How should the vehicle act in this situation? In the second scenario, the autonomous vehicle has two pedestrians crossing the road in front of it. Here two alternatives are possible: either the vehicle hits them, or it steers away from them, possibly causing serious harm to its passengers.

Let us try to analyze these questions according to different ethical frameworks. One of them is consequentialism, also known as the ethics of consequences. According to consequentialism, decisions and actions are evaluated as morally good or bad according to the consequences they bring about. But what is a morally good or bad consequence? Here, different criteria can be used. One criterion is the so-called utility principle, which is the basis of the ethical framework known as utilitarianism: according to utilitarianism, one should choose those actions that result in the greatest happiness for the greatest number of people. In general, then, applying consequentialism to autonomous vehicles in the case of unavoidable accidents means minimizing the damage in an impartial way. However, it is when you try to go into the details of the different scenarios, like the ones sketched above, that problems arise. For example, to minimize the damage to the people involved in the first scenario (the one with the cyclists), the vehicle should be programmed to collide with the cyclist wearing the helmet, who is more protected and thus likely to suffer less serious consequences.
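The consequentialist decision rule just described can be illustrated with a small sketch. This is a toy illustration, not a real control algorithm: the option names and the numeric probability and severity values are invented for the example, and any real system would need far richer models of outcomes.

```python
# Toy illustration of a purely consequentialist (expected-damage-minimizing)
# choice among unavoidable-collision options. All numbers are invented.

def expected_harm(probability_of_injury, severity):
    """Expected damage of an outcome: probability of injury times its severity."""
    return probability_of_injury * severity

# Hypothetical options for the first scenario: collide with one of two cyclists.
# The helmeted cyclist is assumed to face the same injury probability but a
# lower injury severity, because the helmet protects against the worst outcomes.
options = {
    "cyclist_with_helmet": expected_harm(0.9, severity=4),
    "cyclist_without_helmet": expected_harm(0.9, severity=9),
}

# A purely consequentialist controller picks the option with least expected harm.
choice = min(options, key=options.get)
print(choice)  # cyclist_with_helmet -- exactly the debatable outcome in the text
```

The sketch makes the problem concrete: under these assumptions, minimizing expected harm systematically penalizes the cyclist who took the precaution of wearing a helmet.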
This is of course quite debatable, but it is what emerges if we consider the issue within a purely consequentialist approach, where the goal is to minimize the expected damage.

Let us move on and analyze the ethical issues of autonomous vehicles within a different ethical framework, namely duty ethics. Duty ethics, also known as deontological ethics, is the class of approaches in which an action is considered morally right if it is in agreement with a certain moral rule. These rules can have different origins: they can appeal to a social contract that the involved parties have implicitly agreed to (for example, a company code), or they can be based on reasonable arguments. Think, for example, of a parent using an autonomous vehicle to bring her daughter to school every day. The safety of the daughter is a duty for this parent and, more importantly, it is independent of any evaluation of the possible consequences. Can we design the autonomous vehicle to prioritize the safety of the child over any other consideration? In answering this question, many issues emerge also in the case of duty ethics, such as the rights of the other road users and the possible social tensions deriving from this choice.

In the discussion of the ethical issues of autonomous vehicles, it is also important to consider the problem of the attribution of responsibility. Who is responsible in the case of damage to people or goods? Is it the company producing the vehicle, the engineers designing it, or the owner? Answering this question is particularly difficult because none of these subjects has direct and complete control over the autonomous vehicle. Several solutions are discussed in the literature; they depend on many elements, including the ethical framework adopted. It is clear that filling these policy vacuums through ethical analysis is a key element in the development and application of autonomous vehicles in our societies.
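The contrast between the two frameworks can also be sketched in code: whereas a consequentialist controller compares outcomes, a duty-ethics controller first excludes any action that violates a moral rule, regardless of its consequences. The rule and the action names below are invented purely for illustration.

```python
# Toy sketch of a deontological (rule-based) filter: options that violate a
# moral rule are excluded outright, before any comparison of consequences.
# The rule and the candidate actions are hypothetical examples.

def violates_rule(action):
    """A hypothetical hard rule: never deliberately endanger bystanders."""
    forbidden = {"swerve_onto_sidewalk"}
    return action in forbidden

candidate_actions = ["brake_hard", "swerve_onto_sidewalk", "steer_into_barrier"]

# Deontological step: discard rule-violating actions first; only the remaining
# permissible actions may then be compared on other grounds.
permissible = [a for a in candidate_actions if not violates_rule(a)]
print(permissible)  # ['brake_hard', 'steer_into_barrier']
```

The design choice is the key point: in this scheme a forbidden action is never chosen, even if it would minimize expected harm, which is precisely where duty ethics and consequentialism come apart.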
Here another important element emerges: how to translate ethical analysis into concrete policies. An interesting example is the ethics commission appointed in 2016 by the German Federal Ministry of Transport and Digital Infrastructure. A year later, the commission published a code of conduct with twenty guidelines that mix rules deriving from duty ethics, such as the prohibition to discriminate on the basis of gender and other individual characteristics, with rules deriving from consequentialist ethics, such as a design approach devoted to minimizing risks.

To sum up, in this lecture we have seen that ethics does not solve in a definitive way the issues raised by the development and deployment of autonomous vehicles. Rather, ethics allows us to better understand the difficulties at stake. This does not mean that ethics is useless. On the contrary, ethics is the best tool we have for deciding about moral issues in a meaningful way.