To kick off media Mondays on Monkey Paw Robot Arm, I thought today I’d link to this recent podcast from the NPR show On Point, about the threat posed by A.I. Tom Ashbrook, the host, makes Max Tegmark, head of the Future of Life Institute, dance like a water drop on a frying pan on several points, rather amusingly.
The Future of Life Institute, hosted by M.I.T., are the guys who just got $7 million out of Elon Musk to protect us from evil robots/robot brains, a chunk of which they have duly handed out to a bevy of researchers. (Ashbrook talks to a couple of them as well.)
One thing that struck me, listening to this discussion, was the huge level gap between laymen and researchers on this topic in particular. One of the core bits of being a journalist is acting as a bridge between the layman’s level of understanding of a topic and the expert’s level of understanding. It’s probably the trickiest bit, too; sometimes trying to blow a problem up into a big enough generality that a layman can understand it distorts its nature quite badly from an expert’s point of view. And, of course, quite a lot of the time, spending your whole life burrowing into the particular details of a topic makes you forget that distinctions which seem both clear and significant to an expert are invisible to a non-expert.
With A.I. in particular, though, it seems like the level gap is so large as to generate quirky absurdities. That is, the boots-on-the-ground detail work that researchers are tackling right now concerns questions which are prosaic, humdrum — how do I get this robot to explain to me why it went left instead of right just then? — and the answers are technical, rigorous, mathematical. Whereas the level of the topic most laypeople are able to grasp is profound, philosophical — if computers become smarter than us, how do we control them? — and the only possible answers require delving into 2,000 years’ worth of human thought on ethics and morality.
Listening to the experts themselves try to bridge that gap is fascinating; in many instances the answer is simply “I dunno.” Tegmark in particular speaks of the need to align the goals of A.I. with the goals of humanity… but what are the goals of humanity, beyond survival? Peace on earth, goodwill towards men? To see your enemies driven before you and hear the lamentations of their women, to cite a different Arnie classic than the one that usually comes up in A.I. discussions? I don’t think we’ve managed to come up with a solid answer on this stuff in the past 6,000 years or so, and now it’s going to be left for the engineers to decide.