Science is finally taking the need for robot ethics seriously, says Nature News’ Boer Deng
On the other hand, it may be a little much to ask engineers to solve the kinds of problems that were whistling over Plato’s head more than 2,000 years ago:
“The difficulty is that we don’t know what the human race’s values are, so we don’t know how to specify the right goals for a machine so that its behavior is something that we actually like.” — Stuart Russell, a professor of electrical engineering and computer science at the University of California, Berkeley
Though they do seem to agree that we should probably try to prevent a robot apocalypse. (h/t Grant Gross)
Speaking of apocalypses: Zuckerberg literally wants Facebook to be able to read your mind, says Stuart Dredge in the Guardian.
On a more prosaic note, here’s a piece by Michael Belfiore that goes in-depth on some of the safety features that allow collaborative robots to come out of their cages.
Out of their cages… and into the Octagon! According to this piece by Engadget’s Mona Lalwani, America has challenged Japan to a giant robot fight.
PC World sent Martyn Williams to stalk/tailgate the new Google cars to see how they really drive (slowly, with hazard lights flashing the whole time).
And, machine learning is hard. Also maybe kind of racist. (h/t Mark Bergen)