Columbia computer scientists and the Toyota Research Institute are making advanced home robots a reality. Robots are improving the quality of human life, augmenting our abilities, freeing up time, and enhancing our safety and well-being. Handling complex requests, however, will require greater mobility and intelligence.
Computer scientists at Columbia Engineering and the Toyota Research Institute draw on psychology, physics, and geometry to create algorithms that let robots adapt to their surroundings on their own.
A longstanding challenge in computer vision is object permanence: the understanding that an object continues to exist even when it is not visible at a given moment. Most computer vision applications ignore occlusions entirely, losing track of items that become temporarily hidden from view.
The scientists built a system that learns physical concepts by watching many videos, training the computer to anticipate what a scene will look like in the future. Solving this prediction task across many examples forces the machine to form an internal model of how objects typically move through their environments. For example, when a soda can is hidden as the refrigerator door shuts, the machine infers that the can still exists, because the can reliably reappears when the door reopens.
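The idea of remembering occluded objects can be illustrated with a minimal sketch (this is an illustrative toy, not the researchers' actual learned model): a tracker keeps a memory of each object's last known position, so an object that drops out of view is retained rather than forgotten.

```python
# Toy object-permanence tracker (illustrative only): objects that
# become occluded are kept at their last known position instead of
# being dropped from the tracker's belief state.

def track_with_permanence(frames):
    """frames: list of dicts mapping object name -> (x, y) position
    of the objects visible in that frame. Returns the tracker's
    belief about every known object after each frame, as
    name -> (position, currently_visible)."""
    memory = {}    # name -> (last known position, visible flag)
    beliefs = []
    for visible in frames:
        # Update positions of objects currently in view.
        for name, pos in visible.items():
            memory[name] = (pos, True)
        # Objects not in view persist at their last known position.
        for name in memory:
            if name not in visible:
                pos, _ = memory[name]
                memory[name] = (pos, False)
        beliefs.append({n: (p, v) for n, (p, v) in memory.items()})
    return beliefs

# The soda can is occluded in frame 2 (door shut) and reappears in
# frame 3; the tracker never loses it.
frames = [
    {"soda_can": (5, 2), "door": (0, 0)},
    {"door": (0, 0)},                      # can hidden behind door
    {"soda_can": (5, 2), "door": (0, 0)},  # can visible again
]
beliefs = track_with_permanence(frames)
print(beliefs[1]["soda_can"])  # ((5, 2), False): remembered, not visible
```

The real system learns this behavior implicitly from video prediction rather than using hand-coded rules, but the belief state it must maintain is analogous.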
This work could expand the perception capabilities of home robots. Indoors, things become hidden from view all the time, so robots need to interpret their surroundings intelligently. The "soda can inside the refrigerator" situation is just one example: any application that uses vision will benefit if robots can draw on memory and object-permanence reasoning to keep track of objects and humans as they move around the house.
A truly useful home robot should have a variety of skills and be able to work in an unstructured environment. A robot will also need to identify a task and determine the order in which to complete the subtasks it requires.
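Ordering subtasks is naturally expressed as a dependency graph. As a hedged sketch (the subtasks and dependencies here are hypothetical, not from the article), a topological sort guarantees that every prerequisite is completed before the steps that depend on it:

```python
# Illustrative sketch: ordering hypothetical subtasks for a household
# chore with a topological sort, so prerequisites always come first.
from graphlib import TopologicalSorter

# Hypothetical dependency graph for "put the soda can away":
# each key maps to the set of subtasks that must finish before it.
deps = {
    "open fridge": set(),
    "place soda can inside": {"open fridge"},
    "close fridge": {"place soda can inside"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)
# ['open fridge', 'place soda can inside', 'close fridge']
```

For a chain like this the order is unique; with independent subtasks, any order respecting the dependencies would be valid, and a planner could choose among them by cost or distance.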