Robots can do some pretty cool stuff these days, but so far most of it is trivial in the grand scheme of things, and I think we're a lot farther off from true artificial intelligence than most people believe.
I read a Wired article about new robot hands, and it discussed the methods scientists are using to train the robots to perform a simple task. The first robot, Stair, is trained to learn from its mistakes: as it tries to pick up a glass from a dishwasher, it keeps trying new methods until it succeeds. The second robot, UMan, is trained to analyze objects, figure out how it can interact with them, and store that knowledge for the future.
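Stair's approach boils down to a trial-and-error loop. Here's a toy sketch of that idea — the strategy names and the success condition are entirely invented for illustration, not how Stair actually works:

```python
import random

def attempt_grasp(strategy):
    """Hypothetical environment check: in this toy model, only the
    'pinch' strategy succeeds at picking up the glass."""
    return strategy == "pinch"

def learn_to_grasp(strategies):
    """Try strategies in random order until one works, remembering
    the failures -- a crude trial-and-error learner."""
    failed = []
    remaining = list(strategies)
    random.shuffle(remaining)
    for strategy in remaining:
        if attempt_grasp(strategy):
            return strategy, failed
        failed.append(strategy)
    return None, failed

winner, failures = learn_to_grasp(["push", "scoop", "pinch"])
```

The point of the sketch is the limitation it exposes: the learner can only succeed if a working strategy is already in its repertoire. Put a bowl in front of it and, if none of its grasp strategies apply, it fails forever.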
That’s impressive and all, but people learn both ways. There are those who want robots to be domestic servants and even caregivers, aiding the elderly with various tasks and dispensing their medication. A lofty goal, but would you want grandma’s robo-buddy to learn its lessons by injecting her with the wrong medications or turning her upside-down in the shower because she just happens to fit that way? Before you say that’s unfair, remember that robots will have to be at least as reliable and safe as trained human beings before they can replace us.
It got me thinking that maybe the scientists are approaching robotic programming the wrong way. A robot’s world is literally black and white: everything is binary. That world gets even smaller when we realize that a robot is further limited to its programming. Stair can pick up a glass, but what happens when you put a bowl in front of it? Time for more programming. It’s not a great leap beyond the robots assembling cars, which have one task they do repeatedly. As the article mentions, you move a part six inches to the right and that car-assembling robot is lost. The new robot may be able to find the part, but move it to a new point on the assembly line with a new job and it has to be reprogrammed.
“But wait, Mike,” you might be saying. “UMan might be able to find the part and figure out what to do with it.”
Not necessarily. It may figure out the parts and the tools, but will it be able to assemble a car? Even if it does (which I acknowledge would be impressive), it needs to know what a car is. It needs to be programmed with the blueprints and guides, and it needs to follow them in the proper order. Now let’s say it just built a Ford Focus. Take UMan and put it on the Ford Taurus line. Whoops, time to start from scratch again.
Better than the robot arm? Certainly. Ready to take humans out of the equation? Not by a long shot.
Which leads me to the first thing humans have that robots don’t: imagination. Whatever its origin — evolutionary or divine, natural or supernatural — imagination is what sets us apart from the rest of the animal kingdom. Where animals adapt to their environment, we adapt our environment to us. We have the abstract thinking to apply objects to different tasks. Robots test medicine for us by putting different compounds in petri dishes with different cells and cultures in them, but they’d never come up with that idea on their own. UMan may figure out how to work a can opener, but I doubt it would ever think to open a package with the can opener.
Even the world’s smartest monkeys are light years ahead of robots because monkeys have a capacity to learn. We can teach a gorilla like Koko sign language because it not only learns the gestures but can learn to apply them. Teaching a robot sign language doesn’t mean it will pass the Turing Test (and even if it did, it would be because a human programmed it with specific responses based on human psychology, not because the robot figured out how to communicate on its own).
Take UMan to the next stage of its learning evolution, and it still won’t compete. When you teach a child what a dog is, she understands different breeds of dogs are still dogs, even if she doesn’t understand what a breed is. If you show UMan a Chihuahua and tell it “this is a dog,” what will its reaction be when you introduce it to a Great Dane? Or an Old English Sheepdog?
The brilliant reCAPTCHA project is a perfect example of this. A human being can recognize different fonts and still identify the letter Z even if part of it is smudged. Optical character recognition software, despite years of development, cannot. As a result, researchers are now using human brainpower to help the computers decode obfuscated words in scanned classic texts.
Robots also lack judgment. Take Aiko, for example. Her creator gropes her breast and she gets angry. (We’ll put aside for now the fact that only a human would think of molesting a robot.) Pretty cool at first, but she’s only reacting that way because he programmed her to. She wouldn’t be any more offended than a slab of iron if she didn’t have the programming. The guys behind Real Doll would probably pay him a fortune to program her to purr with pleasure instead, but that wouldn’t be any more authentic a reaction.
Robots lack instinct. If a toddler trips and falls, he pulls back his head and throws out his hands. A robot, on the other hand, would fall flat on its face unless it were specifically programmed to catch itself. We swing pads at students in karate class, and even if they don’t automatically raise their hands to block, they shrink away or turn their heads. If we were a real threat, they’d run away. Attack a robot with a hammer and you could beat on it all day unless it were programmed with specific fight-or-flight responses.
There may be a way to cheat some of this. A robot’s capacity for learning is limited only by its storage capacity. Filled up a hard drive teaching it sign language? No problem. Install a second drive and teach it Japanese. Moore’s Law says robots will be able to process this data faster and faster, so that shouldn’t be a problem, either. Especially as mechanical (or chemical?) articulation improves and rivals human physical capabilities. The robotic evolutionary leap comes into play when they can start communicating with one another.
I’m not talking about Ethernet here. I’m talking communication of ideas, of concepts. If Stair could teach UMan all it has learned, and they in turn teach their successors all they have learned, and so on, it only takes a few generations before you can build a sizeable catalog of robotic intuition. A future robot could conceivably come online knowing everything every robot before it has ever learned. Throw a solid grasp of learning by imitation into the mix, and it’s not long before you get a reasonable facsimile.
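The inheritance idea above can be sketched as nothing fancier than merging skill catalogs across generations — the robot names and skill entries here are made up purely to illustrate the accumulation:

```python
def inherit(generations):
    """Merge skill catalogs from a sequence of robot generations.
    Later generations override earlier entries for the same skill,
    so the newest knowledge wins."""
    catalog = {}
    for robot in generations:
        catalog.update(robot)
    return catalog

# Hypothetical skill catalogs for the two robots in the article.
stair = {"grasp glass": "trial-and-error policy"}
uman = {"use can opener": "object-interaction model"}

# A successor robot comes online knowing both.
successor = inherit([stair, uman])
```

Each new generation starts with the union of everything before it, which is the "come online knowing everything" scenario — though, as the next paragraph notes, an ever-growing catalog of learned skills still isn't the same thing as judgment.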
It’s still far from human in ultimate potential, but now we’re talking enough to at least consider putting them into more complex roles than assembling cars and pouring test tubes. I wouldn’t want one taking care of grandma, or in fact doing much of anything with human lives at stake. Make one a traffic cop and it will probably do fine as long as human drivers cooperate. What happens when someone ignores it, or attacks another driver in a fit of road rage? I’d rather see überbots sent to Mars to collect samples and explore crevices than placed in an old folks’ home.
In short, I think we’re far better off concentrating our efforts on supplementing — perhaps even augmenting — humans rather than replacing them, because full replacement is never going to happen.
One final thought: there are many people who think Isaac Asimov’s Three Laws of Robotics make robots safe to put in positions of responsibility over humans. They seem to forget that Asimov spent a good portion of his career figuring out how those same Three Laws are flawed.