Why Dancing Robots are Not a Problem
I have to stand up for my robo-buddies. I just read James J. Ward’s story, which puts forward the idea that videos of advanced humanoid robots performing dance routines raise ethical problems by normalising the idea that it is fine to control and dominate humanoid robots, and by extension humans themselves.
As a roboticist and AI researcher myself, I agree that we as a field must keep the ethics and safety of what we do in mind, since the results can be, and in some areas already are, significantly disruptive to society and humanity. That is why efforts to guide the ethical and safe development of AI and robots, in academia and in industry, are definitely welcome.
Unfortunately, James’s worries are in my opinion misguided, and his argumentation is flawed in several ways. As an insider I may be biased, but I think a view grounded in experience with robots and their interaction with humans is useful in this discussion.
Celebration of Complexity
Firstly, as others were also quick to point out, the video is very much not about “very precise movements” designed by their “exceedingly clever programmers”. I don’t dare question the cleverness of Boston Dynamics’ programmers, but I am very sure they did not design the movements performed in the video precisely and exactly. It is well known that activities that humans and other animals perform subconsciously and with apparent ease, such as legged locomotion, are much more difficult to implement artificially than more deliberate behaviours that we would traditionally relate to higher intelligence, such as maths, logic, and playing chess or Go (a phenomenon known as Moravec’s paradox).
The current generation of legged robots perform complex motions in uncertain environments and on uneven or pliant surfaces, where the best way to move to maintain stability cannot be predicted in advance. No longer do robots always aim for so-called “static stability”, in which they can stop moving at any moment without falling over, performing only carefully scripted, slow motions more akin to Tai Chi than Rock & Roll.
Nowadays robots figure out for themselves how to perform motions in a “dynamically stable” way, where simply stopping would make them tumble over, and at times they even have no control over their stability at all, such as in the middle of a jump. They figure out how to deal with unexpected perturbations and recover autonomously when stability is lost. Yes, the main sequence of dance moves is set out by their human programmers, but the finer details of how to move their bodies optimally are worked out by the robots themselves, possibly learned through trial and error via a process called “reinforcement learning”.
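To give a feel for the trial-and-error idea, here is a minimal sketch of tabular Q-learning on a made-up toy balance task. It is nothing like the deep-learning controllers real robots use, and every state, action and reward in it is a hypothetical illustration, but the principle is the same: the agent is never told how to balance, it just gets penalised for falling over and gradually discovers which corrective action to take in each tilt state.

```python
import random

random.seed(0)

# Toy "balance" task: state is a discretized tilt (0..4, 2 = upright).
# Actions push the tilt left (-1) or right (+1); a random perturbation
# models the uncertain environment. Reaching state 0 or 4 means falling.
N_STATES, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action + random.choice((-1, 0, 1))))
    done = nxt in (0, N_STATES - 1)          # fell over
    reward = -1.0 if done else 1.0           # penalise falling, reward staying up
    return nxt, reward, done

for episode in range(2000):
    s = 2  # start upright
    for _ in range(50):
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        nxt, r, done = step(s, a)
        # Standard Q-learning update toward the bootstrapped target.
        target = r if done else r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = nxt
        if done:
            break

# The learned policy leans back toward upright: push right when tilted
# left of centre, push left when tilted right of centre.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(1, N_STATES - 1)}
print(policy)
```

The point of the sketch is that the corrective behaviour is never scripted anywhere in the code; it emerges from the reward signal alone, which is the sense in which the robots “figure it out themselves”.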
In this sense, enjoying the routine is closer to watching a dog in a dance routine choreographed by a human, but performing moves the dog learned to execute itself, which I hope does not bring about the same ethical concerns. Of course there is the philosophical discussion about how much the dog enjoys the routine versus the internal experiences of the robot. However, my point is that the separation between how robots work and how humans and animals in general work is becoming less clear-cut than James may think.
The Usefulness of Being Humanoid
Why do humans do anything? For sex and power, as James seems to argue, backed up by a link to a publication on Robot Sex. Disregarding that this publication actually makes a case for refuting ethical concerns similar to those James puts forward: if you look up the primary uses of humanoid robots, as he suggests, you will find plenty of practical reasons rather than perverse or abusive ones. The first thing Google tells me after such a query is:
Humanoid robots are used for research and space exploration, personal assistance and caregiving, education and entertainment, search and rescue, manufacturing and maintenance, public relations, and healthcare
The strongest argument for giving robots a humanoid form is that we want, or even need, them to operate in environments designed for humans. Our factories, homes and many other environments are made for humans, and in many cases it is more effective to create robotic helpers explicitly adapted to those environments than to reconstruct the environments from scratch to fit other robot designs, even if those designs might be more efficient in themselves. This is especially the case when robots need to operate and coordinate with humans. Another example is given by search and rescue scenarios: the DARPA Robotics Challenge, for instance, focuses on “robots capable of executing complex tasks in dangerous, degraded, human-engineered environments”, for which humanoid forms are popular and effective.
We are not stuck with the dichotomy James presents, of creating humanoid robots either because we want something familiar or because we want something human-like to boss around. It can also just be practical.
The Human Response
James’s main warning seems to be that making robots, especially humanoid ones, do human-like things brings out the worst in us and makes us think it is OK to do the same to real humans. Anybody who has worked with robots and humans will have a hard time agreeing with this. As James says, humans anthropomorphize things to an astonishing degree. But in my experience, and that of many in the active research field of Human-Robot Interaction (HRI), the main emotion this brings about when dealing with robots is empathy, not lust or the desire to control or abuse.
Actually, the more human and free-willed a robot is, or maybe just appears, the more this seems to be so. A friend and fellow researcher of mine has shown how feelings of ‘warmth’ and ‘competence’ towards a robot affect the interaction with it similarly to human-human interaction. Studies have shown elderly people entrusting their life stories to robots that show only minimal human responses, and feeling happier because of it. Robots with humanoid features are helping autistic children learn about social interaction. Some research shows that higher levels of perceived intelligence actually reduce the willingness to abuse. I have personally experienced crowds of children and adults alike cheering for humanoid football-playing robots, and feeling empathy and compassion for them when they struggle, mess up or fall over.
James is afraid that such projection onto robots “undermines the things that make us human”, but I am convinced it actually opens a window onto some of the parts that make us so.
The Real Ethical Questions
Of course I am not, and we as a field must not be, blind to the dark corners of humanity and the horrible things we are able to do to fellow humans. But to argue that dancing robots reinforce this aspect, or, even worse, bring it about where it doesn’t already exist, turns things upside down. Humanity still hasn’t figured out all the ethical issues amongst ourselves, and we need to continue to work on those; James himself says “we should really be spending our time figuring out how to make sure that humans have human rights”. But robots can actually help us with that, by helping us figure out why we behave in such ways and how even non-human impulses can bring about pro-social behaviour.
If we really want to get into the ethics of AI and robotics, and again I agree that this is an important discussion to have, then let’s focus on the real problems: AIs unknowingly learning and amplifying biases and discrimination from human examples, robots and drones built for military use and equipped to kill autonomously without human intervention, machine learning used to dismantle privacy and free will, et cetera.
By tackling these real issues we can try to make sure this groundbreaking new technology becomes a force for good rather than something we feel we need to fear, while at the same time marvelling at and celebrating human ingenuity by watching some robots dance!