The only way is ethics, but whose?
The discussion around the automation of human processes is heating up as experts debate the idea of robots becoming all-singing, all-dancing, all-thinking machines that make human ethical decisions.
Some experts predict that we’ll have fully automated, interactive robots within ten years. But the ethical dilemmas all humans face daily, even the simplest judgements, rest on the basic human values each of us holds. Human ethics in a human world, interpreted by a hunk of metal, suddenly become subjective, shaped by the robot’s creator. If robots are to make judgements based on human morals, then who will be playing God?
Let’s take holding a door open for someone. A simple act! We’ve all been there: you hold the door open for one person, then one, two, maybe three more walk through with sheepish grins as they ‘tuck’ themselves past. ‘You’ll be there all day’ is the standard retort (usually with a cursory toss of the head!). That is, until you make a judgement about who takes over your self-appointed role as doorman, or decide it’s time to move on. That tiny pause, glance or gesture is your signal to let go. Would a robot know when to let go?

Granted, this is a simple example. David Edmonds explores the question far more eloquently for the BBC in “Can we teach robots ethics?”, with more troubling examples, such as an automated car deciding whether to crash into a single person or a group to minimise fatalities, sacrificing one to save many. Have a read; it’s a compelling dilemma. But it demonstrates not only the question of what decisions robots will need to make, but whose value system those decisions will be based upon.