Roborei.com has teamed up with Nikil Mukerji, an economist and philosopher by training who currently works as a researcher at the philosophy department of Ludwig-Maximilians-Universität München (Munich, Germany). He has just released “Towards a Moderate Stance on Human Enhancement” and will soon publish “Autonomous Killer Drones”, his newest research paper evaluating moral decision-making for autonomous machines. This is something everyone in the robotics and drone industry is concerned about, and we look forward to reading his findings.
Today we will be asking Nikil Mukerji some questions related to driverless vehicles and the future integration of robots and humans.
Interview Questions and Answers
Hi Nikil, can you describe where you work?
LMU München is one of the largest universities in Germany and, I believe, one of the best environments for doing research and teaching in the country. This applies in particular to the chair I work for. It is held by Julian Nida-Rümelin, who is one of the leading philosophers in the country and served as minister of state in the first cabinet of chancellor Gerhard Schröder.
Can you describe to our readers what you do day to day?
There are four things that I do on a daily basis. I serve as the academic director of an executive degree program in philosophy, politics and economics called Philosophie Politik Wirtschaft. Obviously, being an academic philosopher, I also do a fair bit of teaching and research, mainly in moral and political philosophy. Last but not least, I am a consultant for business executives and state-level politicians with the Institut für Argumentation in Munich. The institute, which I co-founded, uses an interdisciplinary approach (with tools from philosophy, psychology, economics and various other scientific areas) in order to solve problems that arise in the workplace.
Can you describe your involvement in the Robolaw project?
Robolaw was a research project (FP7) funded by the EU, which aimed to better understand the legal, moral and political aspects of the regulation of emerging technologies (incl. robotics). Within the project I was mainly responsible for the analysis of ethical issues to do with various forms of human enhancement. But I also looked into issues that arise in the context of robotic technologies. To be honest, I most enjoyed thinking about what philosophers can learn from robots and what futuristic scenarios depicted in literature and film can teach us. For further information, see my paper “Why moral philosophers should watch sci-fi movies”.
Are you embracing or scared by the prospect of driverless vehicles?
I guess I am a pragmatist when it comes to the regulation of new technologies. There are always pros and cons that you have to take into consideration before you can judge whether an emerging technology should be used and, if so, how and to what extent.
We obviously have to get a clear grasp of the benefits and potential risks that the technology in question promises and we must make sure that there will be fair and equal access to it – at least in the long run.
Driverless vehicles are no exception in that regard. At this point, many people are scared by the prospect of having them on our streets. This, I guess, is understandable from a psychological point of view. After all, how can you trust a mindless entity to make the right judgement calls when it matters, right? But many of those who entertain such doubts tend to overlook the degree to which we already trust technical systems. Modern train networks, e.g., would be impossible to handle if machines did not call most of the shots. If you are sceptical about driverless cars in principle, you should not set foot on a modern train either.
Beyond that, it is not clear that the human factor is always an advantage. The recent Germanwings crash, which was very likely caused intentionally by the co-pilot, is a tragic case in point.
Will driverless cars be as safe as expected?
I don’t know, I am not an engineer. But I doubt that there is a clear-cut answer to that question. Technologies are neither safe nor unsafe in themselves.
It depends on what we do with them. To give you an analogy:
Conventional car travel is neither safe nor unsafe either. I am from Germany, where it is legal to drive like a hellraiser on most motorways. This morning, I could have chosen to drive to work at 160mph, had I wanted to. (I chose the train instead.) The fact that I can do that significantly increases the risk of a very serious accident.
Similarly, the safety of self-driving cars will depend on the specifics of their programming, traffic density and speed to name but a few. All of this can be regulated! That said, I should emphasize that self-driving cars are an amazing feat! They are already able to handle high speeds and difficult tracks much better than most human drivers.
How will driverless cars deal with the responsibility to decide between life and death of the passengers in an accident?
That’s a tough one! We will obviously have to discuss how driverless cars should react if something goes wrong and lives are at stake. There are a number of problems that I see here.
- Firstly, professional ethicists (as well as the general public) disagree on a host of moral issues. So it won’t be easy to get a consensus.
- Secondly, it is hard to see at this point how we could implement even the moral judgements that most of us share.
Consider, e.g., the famous trolley problem, which is obviously relevant in the context of driverless cars.
Imagine you are the driver of a trolley car whose brakes have failed. You are headed towards five workers who are working on the tracks ahead of you. You know that you will kill them if you run them over. Then you realize that you can turn a steering wheel. If you do that, your trolley car will take a left at the next turnout. Unfortunately, there is one other worker on the left track. He will get killed if you choose to take a left. What should you do?
It is clear that both options are bad. But the second nevertheless seems preferable to most of us because fewer people die. This, however, does not mean that you should always choose the option that is associated with fewer deaths. Most of us agree, e.g., that you should not push a person in front of the trolley to stop it, if this kills him and saves the five. There are fine nuances and it is hard to see how they could be implemented via an algorithm.
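To make the difficulty concrete, here is a purely hypothetical sketch (not drawn from any real driverless-car system): a naive “minimize deaths” rule is trivial to write down, yet it treats the switch case and the push case identically, even though most of us judge them differently.

```python
# A naive "minimize deaths" decision rule -- a hypothetical illustration.
# It always picks the option with the fewest expected deaths, so it cannot
# distinguish turning the trolley from pushing a bystander in front of it.

def fewest_deaths(options):
    """Return the option whose outcome involves the fewest deaths."""
    return min(options, key=lambda option: option["deaths"])

switch_case = [
    {"action": "stay on course", "deaths": 5},
    {"action": "turn left", "deaths": 1},
]
push_case = [
    {"action": "do nothing", "deaths": 5},
    {"action": "push bystander", "deaths": 1},
]

print(fewest_deaths(switch_case)["action"])  # turn left
print(fewest_deaths(push_case)["action"])    # push bystander
```

The rule endorses pushing the bystander just as readily as turning the trolley; capturing the moral distinction most people draw between the two would require encoding something beyond a body count.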
In short, then, I don’t know how that problem will be handled. But I don’t think that it is fair to reject self-driving cars on that basis. When I took my driving test I didn’t have to pass a moral judgement test and even if I had, I probably would not be able to implement the “right” moral principle in a split-second decision. I think the relevant question is not, then, whether driverless cars will be able to handle all situations perfectly, but whether they can reasonably be expected to perform just as well as (or better than) human drivers.
How will driverless cars affect the privacy of passengers?
Potentially, they can have a very negative effect in that area because they are essentially giant data gathering devices (just like cell phones, by the way). But, again, the issue here is regulation! It is certainly possible to impose stringent restrictions on data gathering and usage. But I suspect that strong interest groups will push for a different solution. We are thus well advised to stay vigilant.
Driverless car sharing will clearly benefit the environment but will it come at a cost of lost sales and jobs for the car manufacturing industry?
I’m not sure to what extent that will be the case. I suspect that the vast majority of car manufacturers will anticipate these developments and will quickly adapt to the new circumstances.
But automation and new technologies always destroy certain types of jobs. To that extent, they are a double-edged sword. While society at large benefits from technological developments and social innovations like car pooling, a number of people are always thrown under the bus.
Therefore, it is important – and that is a more general ethical point – that technological progress and solidarity go hand in hand. If we as a society treat fairly those whose particular skills are made obsolete by technology and help them to acquire new skills, we have a much better chance of implementing technological changes.
In general, are you concerned about robots taking human jobs, and why?
Whether you should be concerned about robots taking your job depends mostly on what you do. They will certainly not take over my job anytime soon! So I am not worried about my own job prospects. However, anyone whose job is heavily based on manual labour is at great risk of losing it to a robot.
That’s a problem – and not just an ethical one. On the other hand, robots are quicker, have greater endurance and don’t mind dull, dirty and dangerous tasks. So there’s an upside here as well! Perhaps people shouldn’t do many of the jobs they are currently doing – at least not for a living. I guess what it comes down to is that we have to organize the change in a way that is fair to all. That way we can harness the benefits of the robotic revolution, while making sure that it doesn’t cause any major social problems.
What will happen with the findings from the Robolaw project?
Time will tell. We have put out a number of deliverables, which are aimed at guiding policy decisions at the level of the European Union. The public ones can be downloaded at: Robolaw.eu
Whether and to what extent they will be taken up is hard to foresee. As far as the science is concerned, we have made a number of contributions to the scholarly literature (see e.g. F. Battaglia, N. Mukerji and J. Nida-Rümelin (eds.), Rethinking Responsibility in Science and Technology, Pisa University Press, 2014). A number of issues are still unaddressed, however, e.g. a host of issues to do with automated warfare. I hope that there will be a Robolaw II that will take them up.
How do you believe robotics will affect the global economy, positively or negatively, and why?
Overall, robotics will have a positive impact on the global economy, but not unequivocally. Whether it has a positive impact on the various national economies depends on what they specialize in. Developing countries have many low-skilled workers, whose jobs are threatened by technology. That is a serious humanitarian problem. Perhaps the biggest risk of robotics is that it might increase the plight of the poorest and cut them off from social cooperation.
Will robots take over the world? Why/Why not?
That depends on what you mean by “take over the world”. They have already taken many areas by storm (e.g. the car industry) and will certainly advance into other areas (e.g. into medicine and care).
If you are asking whether robots will, at some point, overthrow and enslave us or use us as human batteries (like in The Matrix), I’m only able to speculate. But it is safe to say that for that to happen, robots would have to become self-conscious first. And it is debatable whether that is even possible.
I, for one, would not rule out that possibility. We know that carbon-based systems can develop consciousness. Why should this be impossible, in principle, in silicon-based systems? But a number of people whose views I respect think differently. And they would give you a different answer.
What will be the biggest challenges for humans with the coming of artificial intelligence?
That will depend on how intelligent technical systems will, in fact, become. At this point, computers are able to process certain types of information much faster than humans. But they do not appear to be conscious.
To be sure, they can mimic certain human characteristics. And we may anthropomorphize them in a number of ways. But we do not have any indication that computers can have conscious experiences – not yet anyway. Should that change, we are in for a host of ethical problems.
Robots could then arguably be regarded as moral subjects and could be seen as bearing moral rights. And not affording them those rights could be seen as a serious moral failure, as e.g. Nick Bostrom has proposed. That, I think, will cause serious problems of acceptance and could be a source of tremendous conflicts.
Roborei.com would like to thank Nikil for offering some great insights into what the future might hold for robotics and their integration into the human world.
We also strongly recommend you keep an eye out for his upcoming work on “Autonomous Killer Drones”; it will surely uncover some unique perspectives on the future.
Once again, thanks to Nikil.
And for the rest of you, stay tuned for more insights from around the globe.