Does it matter that a robot is just a computer with arms? That seems to me to be the core of an interesting debate breaking out in the legal world.
Ryan Calo, a law professor at the University of Washington, has written a lengthy law review article essentially arguing, “Yes, it does.” (If you’re not up for a 50-page PDF, he summed up the thrust of his arguments in this Slate piece.) The law has had twenty-odd years to adjust to the complexities introduced by computers and the internet, but robots are designed to interact with the physical environment in ways their designers cannot always predict, and that, Calo suggests, will pose entirely new legal challenges when things go wrong. Lawyers and lawmakers need to start thinking about some of these dilemmas.
On the not-quite-contrary side is Yale scholar Jack Balkin, arguing for a more contextual approach: humans will employ different kinds of robots in different ways for different tasks. Defining a single shared trait among different kinds of tech as “essential” and making it central to the law around that tech could limit people’s thinking and make for bad law, Balkin suggests.
I find myself sympathetic to Balkin’s argument to some degree. A small problem I’ve already run into trying to launch this blog is that it’s damn difficult to come up with a hashtag or a catchphrase that neatly encapsulates everything I plan to cover; the closest I’ve come to pinning a name on my chosen subject is “intelligent machines.”
That doesn’t necessarily mean a robot: an artificial intelligence need not be able to physically move a chess piece to come up with a winning strategy. So-called software bots and deep-learning algorithms have been having a meaningful impact on the economy at least since the “flash crash” of 2010.
And yet I can’t help but think that Calo is onto something as well, and that embodiment may fundamentally alter the legal paradoxes posed by bots, because that embodiment is now linked to another tricky characteristic. Both Calo and Balkin prefer to dub this characteristic “emergence,” i.e., that bots and algorithms sometimes come up with answers or take actions without their human operators being able to predict those decisions beforehand or understand, entirely, why the bots acted as they did.
For example, a few weeks ago, Google released a bunch of quasi-psychedelic images produced by its photo-recognition algorithm as part of an experiment; the media was happy to dub them “computer dreams”. What the engineers were really trying to do, however, was come to a better understanding of how the hell the algorithm was recognising photos in the first place.
A lot of “deep learning” or “machine learning” apps work the same way Google’s does: you create a program that can tease out similarities among the members of a set. You feed it an example set and tell it that all the members share some trait: these are all pictures of faces, say, or Pepsi cans, or Golden Retrievers. You leave the computer to winnow and winnow and winnow until it comes up with a set of rules that let it identify whether an image has a Pepsi can in it. You can then feed it a set of unknown images and tell it to tag all the Pepsi ones.
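To make that concrete, here is a minimal sketch of the train-then-tag loop in PyTorch. The tiny network, the noise images standing in for photos, and the “Pepsi can” labels are all placeholders of my own invention, not anything Google has published; the point is just the shape of the process: show the program labelled examples, let it keep adjusting its internal rules, then ask it to tag images it has never seen.

```python
# A toy "winnow out the rules" loop in PyTorch. The data here is random noise
# standing in for labelled photos; in a real project you'd load images from disk.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

images = torch.randn(64, 3, 32, 32)      # 64 fake 32x32 RGB "photos"
labels = torch.randint(0, 2, (64,))      # 1 = "has a Pepsi can", 0 = doesn't
loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

model = nn.Sequential(                   # a deliberately tiny classifier
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                    # two outputs: "Pepsi can" vs. "no Pepsi can"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                  # the winnowing: nudge the rules (weights)
    for batch_images, batch_labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_labels)
        loss.backward()
        optimizer.step()

# Tagging the unknowns: the highest-scoring output is the program's call.
unknown = torch.randn(8, 3, 32, 32)
tags = model(unknown).argmax(dim=1)      # 1 wherever the model "sees" a Pepsi can
```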
But though the program will generate its answer, you still don’t always understand the rules it used to make the call. Google’s experiment attempted to tackle this problem by instructing the program to generate entirely new images based on the rules it uses to identify subjects; by seeing what was included in the novel images, the engineers could get a better sense of which features the program considered essential. (And which ones it was wrong about: barbells do not always come attached to muscular arms.)
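In the same hedged spirit, here is roughly what that inversion looks like in code: instead of adjusting the program’s rules to fit the images, you freeze the rules and adjust the pixels of a noise image until the program’s score for some class climbs. This is a generic gradient-ascent sketch using a stand-in model like the toy one above, not Google’s actual code.

```python
# Turning the classifier around: hold the learned rules fixed and optimise the
# *pixels* so the model's score for one class rises. Whatever shows up in the
# image hints at which features the model treats as essential.
import torch
import torch.nn as nn

model = nn.Sequential(                   # stand-in for an already-trained classifier
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)              # the rules stay frozen

image = torch.randn(1, 3, 32, 32, requires_grad=True)   # start from pure noise
opt = torch.optim.Adam([image], lr=0.05)                 # optimise pixels, not weights

for step in range(200):
    opt.zero_grad()
    score = model(image)[0, 1]           # how strongly the model "sees" class 1 here
    (-score).backward()                  # gradient ascent on that class score
    opt.step()

# image.detach() is now the model's own caricature of class 1 -- the equivalent of
# barbells sprouting muscular arms.
```

On an untrained toy network this just produces prettier noise; it takes a real trained network, plus some image-smoothing tricks of the kind Google’s engineers describe, to turn the static into those quasi-psychedelic “dreams.”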
Not understanding exactly how an algorithm is able to find and tag pictures of the cutest little puppies on the whole world wide web is one thing. But that same opacity in other types of decision-making can present much more serious problems, especially as computers are thrust into more complex real-life situations.
Jaime Lannister, of all people, states the dilemma rather neatly: “So many vows. They make you swear and swear. Defend the king. Obey the king. Obey your father. Protect the innocent, defend the weak. What if your father despises the king? What if the king massacres the innocent? It’s too much. No matter what you do you’re forsaking one vow or another.” Even with the best of intentions and the simplest, clearest guidelines, in real life a robot may come across a situation in which those guidelines conflict: acting to prevent harm to one human may cause harm to another (a robo-car swerving to avoid an accident, for example).
Deciding to kill your king in order to prevent a massacre is a pretty extreme example. But there are plenty of everyday situations that present risks: if a human and a bot are both cutting across a factory floor, how wide a berth should the bot give the human? How high do the chances of a collision have to be before the bot stops in its tracks? What if it miscalculates?
These types of ethical conflicts present a problem for the law as well, of course. As Balkin puts it:
The problem of emergence is the problem of who we will hold responsible for what code does… [S]elf-learning systems may be neither predictable nor constrained by human expectations about proper behavior… [I]n most cases, it will be difficult to show either a deliberate intent to harm or knowledge that harm will occur [on the part of a robot owner, operator or designer]… If the law hopes to assign responsibility to humans and corporations, injuries by robotic and AI systems may strain traditional concepts of foreseeability.
But here’s where Calo may have a point: an unpredictable machine that must solve an abstract moral dilemma is one thing. Confronting the Kingslayer’s Conundrum is a different matter when you’re talking about a machine that can manipulate objects in the physical world, in other words, one that could possibly do some actual king-slaying. Or car-crashing, or factory-worker-crushing.
Then again, I’m not so sure that Balkin’s entirely right when he says that “traditional concepts of foreseeability” may not apply here. After all, do we not have some existing laws about how to treat an animate object capable of making decisions and taking actions its owner may not always be able to predict or control? Specifically, the ones about keeping dogs on a leash, or horses in a pasture? What both Calo and Balkin tiptoe around calling “emergent” behaviour, we are satisfied in other contexts to call evidence of intention, or intelligence. And when this quality is exhibited, we call the thing exhibiting it a form of being. It’s striking how quickly implanting even a limited decision-making capability in an object brings you around to questions of personhood, or of life. How much more or less complex than a Baxter is a flatworm or a garden slug?
As Whitney Crooks put it in my interview with her, we shouldn’t forget that today’s robots are “pretty dumb.” Arnold may be back, but we’re still very damn far away from Skynet being a thing. Nevertheless, Baxters are already in factories, and if Whitney and people like her have their way, they might soon be in homes, too. It’s fascinating to me how quickly we’re plunging into this new era of innovation before some of the deep moral quandaries this tech will present us with have even been thought about, never mind solved.