Have you heard of Moral Machine? If, tomorrow, an autonomous car had no choice but to kill either its passengers or pedestrians in an unavoidable accident, how should it choose, and whom? Jean-François Bonnefon, research director at the CNRS and originator of the project, which was designed to confront people with this dilemma, tells us more about Moral Machine.
Jean-François, can you introduce yourself in a few words?
I am a research director at the CNRS, and therefore a public researcher, working at the Toulouse School of Economics, although I am not an economist: I hold a doctorate in cognitive psychology. I work on the general theme of rationality. My goal is first to determine whether people make rational decisions, and then to identify possible sources of irrationality in those decisions. I am interested in economic exchanges, but also in ethical judgment.
"It produced utterly unexpected results"
Is that why Moral Machine was created? Can we talk about the genesis of this project?
The idea took shape in early 2016, while we were preparing a publication for Science that appeared in June of the same year. For the first time, it addressed the question of autonomous cars that might have to make decisions in unavoidable accidents: situations where no trajectory can save everyone, so the vehicle is forced to choose who the victims will be. Among other things, the publication showed a broad consensus among respondents that, morally, the car should save as many people as possible in such a situation. We found this result again and again, except, and this is what intrigued us, in a scenario where there was a child in the car. There, respondents were much more uncertain. It seemed that people did not weigh all lives equally: saving the greatest number was an important principle, but this result, with a child weighing so heavily on people's moral preferences, led us to think the issue should be explored more broadly. Do people weigh all lives equally in these choices? We quickly realized it was getting complicated. There are children, of course, so age has to be taken into account, but also sex and many other characteristics. And as soon as you start comparing groups of people in the scenarios, the combinatorics explode: introduce a little variation and you end up with millions and millions of possible combinations. To explore all these combinations, we concluded that we needed a tool for collecting data at scale, hence a website. That is how Moral Machine was born: a place where people could log in easily and quickly give their opinions on all these very complicated scenarios. In one year we have approached 40 million responses, proof that it worked very well!
Did you expect the respondents to be so enthusiastic?
It's true that it went faster than we expected! Still, we had tried to put the odds in our favor from the outset. We had worked hard to have the site ready on the day our first Science article was published, hoping it would be mentioned in every interview given about that first study. It produced utterly unexpected results. When the article came out, there were hundreds of interviews, and the site took off immediately: from the first day, the servers were saturated by traffic. Then, I think, we were lucky several times to be relayed by people with very large audiences. The day PewDiePie (more than 57 million subscribers on YouTube, editor's note) made a video about Moral Machine, the traffic was spectacular...
Do you think that there was an important expectation of the general public on the subject of ethics in Artificial Intelligence?
I think there is that, yes, but also a certain fascination. As soon as you start working through scenarios on Moral Machine, you very quickly realize that the principles you had in mind at the beginning don't hold up. Many people have had this experience on the site: they start out saying there are rules and that the rules must be respected, then they hit scenarios where they are stuck, where the principles they thought they had adopted no longer apply. I think that also contributed a lot to its success: it was a pedagogical test through which everyone could realize that the problem was much more complicated than it looked.
In the end, Moral Machine is about making the least bad choice...
Making the least bad choice, and realizing that you sometimes do so by intuition, because it's hard to stay consistent with yourself, with the principles you had in mind at the beginning. When a human is driving, the question does not arise: they react on instinct.
"A general consensus" to save children first
What lessons have you learned from the 40 million responses you received on the Moral Machine website?
I can't tell you much about it, because we are still under a press embargo right now. However, I can at least mention the three factors we were interested in: broad trends; individual factors, i.e. whether respondents' individual characteristics predict their preferences; and geographical and cultural variations, i.e. whether we can observe groups of countries that are closer to each other than others, for example. On that note, and coming back to the broad trends, there is a general consensus on children: people place a very special value on children's lives. And that's important to know. In Germany, for example, the government appointed a commission of philosophers and ethicists to reflect on this issue, and the commission's only clear conclusion was that one could not discriminate between two people in such a situation on the basis of age. It will therefore be very difficult to get the public, which as we have seen would save children first, to accept that position. If there is one lesson to be learned from the responses we received, it is that in people's minds children hold a very special place in this situation. We could see conflicts between what a government wants to do and what its public opinion wants...
Can you tell us about what you call the "utilitarian" car?
It was a word I used to describe, simply, the autonomous car that would save the most lives in the event of an unavoidable accident. Of course, with the results of Moral Machine, we can push the principle a little further. A "utilitarian" car is simply a car that decides based on consequences, rather than on a rule that ignores the consequences. For example, a car that would always save its passengers regardless of the consequences would not be a utilitarian car.
But would people be willing to buy an autonomous "utilitarian" car, one that would not necessarily save its passengers in the event of an unavoidable accident?
This does create an internal conflict in people. Respondents tell us that, morally, the car must take the consequences into account and save as many people as possible. But as consumers, when it comes to the car they would actually buy, they would still prefer one that would not sacrifice them... It's not a paradox, though: you just have to overcome your selfish interest to do what everyone agrees is best, provided everyone else does it too. It's the same with taxes. Clearly it is in the common interest for everyone to pay their taxes, but everyone also has a selfish interest in not doing so! The same goes for the car: we realize that if everyone had a car that saved the greatest number of people, everyone would be better off. Mechanically, it would also mean less individual risk. But that requires overcoming the selfishness that would push us to buy a car that would save only us.
Precisely, do autonomous car manufacturers such as Tesla have an interest in selling "utilitarian" cars? Wouldn't offering a car that does not necessarily save its passengers be a commercial handicap?
First of all, our goal is not to dictate public policy or manufacturers' decisions based on the answers certain populations gave on Moral Machine. Secondly, I don't think any manufacturer wants to answer these questions on its own. We saw the example of Mercedes last year, when one of its executives suggested that Mercedes autonomous cars would save their passengers. It wasn't an official statement, but there was a general outcry. In these dilemma situations, and it's not for nothing that they are called that, there are no good solutions. Whatever solution we choose will necessarily outrage part of the population. For a manufacturer, there is no point in making decisions that would make its cars a little more autonomous but would damage its image so badly that it would rather keep selling conventional cars... These are really situations where manufacturers cannot win. We should not ask them to make that kind of choice.
And Moral Machine doesn't have that vocation?
No, we're not here to say, "this is the list of things your population expects." Moral Machine is there to help a regulator or government make informed choices. If certain choices seem legitimate but ultimately run counter to public opinion, a regulator or government should know it. That does not mean doing whatever public opinion wants, especially in areas where it is not necessarily well informed, but anticipating resistance: knowing where the pressure will be strongest, which measures will be unpopular, and how to identify and anticipate them.
Do governments listen to you? Do you feel any particular support?
It's still a little early to say; we are only at the beginning of the analysis of the results.
What are the next steps for Moral Machine?
The report is being written, and we hope to have a new publication soon. Unfortunately, the process is usually quite lengthy.