Robots can be awkward. Even the most advanced we have—DARPA's robotic pack mule, SoftBank's "emotional" companion—are reminiscent of toddlers taking their first, tentative steps. "The Robot" is so named because, no matter how smart our mechanical assistants seem to get, their movements are distinctively stilted.
What this has meant, among other things, is a world of service robots that remain extremely limited in their ability to make our lives a little easier. There are Roombas, of course—oh, such Roombas—but when it comes to robots that can interact smoothly and seamlessly with the world around them, picking things up and putting them away and otherwise lending us a non-human hand ... there aren't many. Algorithms rely on patterns; the patterns of human life are notoriously difficult to discern.
Which is why we have put a robot on Mars, but we have yet to avail ourselves of a robot that can clear the table—or do the dishes, or do the laundry, or make the bed—for us. Robots, like humans, have to coordinate their intelligence systems with their physical outputs. They have to negotiate around a physical world that is full of uncertainty and surprise, using vision—"vision"—that is blurry and out of focus. They have to link their senses to their sensors.
As Ken Goldberg, a professor of engineering at Berkeley, describes the experience of being a robot: "Nothing is reliable, not even your own body."
That kind of sympathy for the robotic experience has led to a new approach to robotic design: "belief space." Which has nothing to do with spirituality—unless your particular religion happens to involve robots—but refers instead to how robots interact with the physical spaces they occupy. "Belief space" is the robot's way of representing those spaces not as certainties but as statistical descriptions: probability distributions over where things might be and what they might be.
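To make that concrete, here is a minimal sketch, in Python, of what a belief-space representation can look like: a robot looking for a mug's handle keeps a probability distribution over possible handle positions and sharpens it with each noisy camera reading. The angles, noise model, and function names below are illustrative assumptions, not a description of any particular lab's system.

```python
# A minimal sketch of "belief space": the robot tracks a probability
# distribution over where a mug's handle might be, rather than committing
# to a single guess. All names and numbers are illustrative assumptions.
import numpy as np

# Discretize the possible handle angles around the mug (0-350 degrees).
angles = np.arange(0, 360, 10)

# Prior belief: no idea where the handle is, so every angle is equally likely.
belief = np.ones_like(angles, dtype=float) / len(angles)

def sensor_likelihood(measured_angle, true_angles, noise_deg=30.0):
    """Likelihood of a blurry camera reading, modeled as a Gaussian."""
    diff = np.abs(true_angles - measured_angle)
    diff = np.minimum(diff, 360 - diff)  # wrap-around distance on a circle
    return np.exp(-0.5 * (diff / noise_deg) ** 2)

# One noisy observation: the camera thinks the handle is near 90 degrees.
likelihood = sensor_likelihood(measured_angle=90.0, true_angles=angles)

# Bayes update: posterior is proportional to prior times likelihood.
belief = belief * likelihood
belief /= belief.sum()

print(f"Most likely handle angle: {angles[np.argmax(belief)]} degrees")
print(f"Confidence at that angle: {belief.max():.2f}")
```

The point of the representation is that the robot never commits to a single answer; it acts on the whole distribution, which is what lets it cope with sensing that is blurry and unreliable.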
So if you're a robot, and your task is to pick up a coffee mug ... how do you do that? How do you grasp the object in a way that will let you lift it with ease? You'd want to do just what a human would: grasp the mug by its handle. But then: how do you distinguish the handle from the rest of the mug? If you don't have a brain—and, with it, the accumulated experience—that differentiates mug from handle from table from chair ... how do you complete a task that is so basic for humans?
"Being able to process belief space was extremely daunting," Goldberg said, during a talk at the Aspen Ideas Festival this afternoon. But processing it, it turned out, was a matter of collecting experience on behalf of the robots: You can use the basic framework of the Internet—networked information-sharing—to allow robots to learn from each others' experience. You can have robots communicate their learnings—the curve of a mug handle, for example—to each other. It's networked knowledge, robot-style. "Robots are now getting on the Internet," Goldberg says, "to share information and software."
Using that kind of crowdsourcing, Goldberg and his fellow roboticists are figuring out ways to help automated machines analyze uncertainty—and, more importantly, developing statistical models that let robots predict, over time, how they should handle and move particular objects. Goldberg, for his part, is developing what he calls a "nominal grasp algorithm"—an algorithm that helps robots both identify objects and determine where to grasp them for pickup. And he's developing it with the help of this roboticized Internet.
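The details of Goldberg's algorithm aren't spelled out here, but the underlying idea—choose the grasp most likely to succeed given what the robot believes, not what it knows for certain—can be sketched in a few lines. Everything below (the one-dimensional geometry, the noise level, the tolerance) is an invented illustration of that idea, not the algorithm itself.

```python
# A sketch of grasp selection under uncertainty: score candidate grasp
# points by how often they would still succeed if the object's estimated
# pose is slightly wrong, then pick the most robust candidate.
import random

random.seed(0)  # make the sketch reproducible

def grasp_succeeds(grasp_x, object_x, tolerance=0.02):
    """Stand-in success test: the gripper must land within `tolerance`
    meters of the object's true position."""
    return abs(grasp_x - object_x) <= tolerance

def robustness(grasp_x, estimated_x, pose_noise=0.01, trials=1000):
    """Estimated success probability under Gaussian pose uncertainty."""
    hits = sum(
        grasp_succeeds(grasp_x, random.gauss(estimated_x, pose_noise))
        for _ in range(trials)
    )
    return hits / trials

# Candidate grasp points along one axis, e.g. from a perception system.
candidates = [0.08, 0.10, 0.12]
estimated_position = 0.10  # where the robot *believes* the handle is

best = max(candidates, key=lambda g: robustness(g, estimated_position))
print(f"Most robust grasp: {best:.2f} m "
      f"(success rate ~{robustness(best, estimated_position):.0%})")
```

The statistical flavor is the point: rather than trusting a single pose estimate, the robot favors grasps that tolerate being a little wrong—which, in belief space, it always might be.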
Which means that, soon, robots could be picking up your mugs ... and clearing your table.
