Movement Trajectory Algorithm Helps Humans and Robots Work Together

By Ruth Seeley

Last year, MIT researchers working with auto manufacturer BMW observed that factory robots were freezing in place in anticipation of humans crossing their paths, a behavior that could lead to significant inefficiencies. The team traced the problem to a limitation in the trajectory alignment algorithms used by the robots' motion-predicting software.

The robots, which rolled on rails and were tasked with delivering parts between workstations, could reasonably predict where a person was headed. But because of poor time alignment, the algorithms couldn't anticipate how long that person would spend at any point along their predicted path, or how long it would take the person to stop, double back, and cross the robot's path again.

Now, members of that same MIT team have come up with a solution: an algorithm that aligns partial trajectories in real time, allowing motion predictors to accurately anticipate the timing of a person's motion. When they applied the new algorithm to the BMW factory floor experiments, they found that instead of freezing in place, the robot simply rolled on and was safely out of the way by the time the person walked by again.

“This algorithm builds in components that help a robot understand and monitor stops and overlaps in movement, which are a core part of human motion,” says Julie Shah, associate professor of aeronautics and astronautics at MIT. “This technique is one of the many ways we’re working on robots better understanding people.”

To enable robots to predict human movements, researchers typically borrow algorithms from music and speech processing. These algorithms are designed to align two complete time series, or sets of related data, such as an audio track of a musical performance and a scrolling video of that piece's musical notation.

Researchers have used similar alignment algorithms to sync up real-time and previously recorded measurements of human motion, to predict where a person will be, say, five seconds from now. But unlike music or speech, human motion can be messy and highly variable. Even for repetitive movements, such as reaching across a table to screw in a bolt, one person may move slightly differently each time.
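The canonical alignment technique in those fields is dynamic time warping (DTW), which stretches or compresses one series to best match another. As a point of reference, here is a minimal, illustrative DTW implementation over one-dimensional series; the data and distance function are placeholders, and this is a sketch of the standard technique, not the researchers' code.

```python
import numpy as np

def dtw_align(a, b):
    """Align two complete 1-D time series; return the total alignment
    cost and the index pairs of the best warping path."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # pointwise distance
            cost[i, j] = d + min(cost[i - 1, j],      # skip a point in a
                                 cost[i, j - 1],      # skip a point in b
                                 cost[i - 1, j - 1])  # match the points
    # Backtrack from the corner to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1],
                              cost[i - 1, j],
                              cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]
```

Crucially, DTW of this kind assumes both series are complete, which is exactly the assumption that breaks down when a robot must align a person's still-unfolding trajectory on the fly.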

Existing algorithms typically take in streaming motion data, in the form of dots representing the position of a person over time, and compare the trajectory of those dots to a library of common trajectories for the given scenario. An algorithm maps a trajectory in terms of the relative distance between dots.

But algorithms that predict trajectories based on distance alone can easily get confused in certain common situations, such as temporary stops, in which a person pauses before continuing on their path. While the person is paused, the dots representing their position bunch up in the same spot, and if the distance between points is the only alignment metric, there is no good way to tell which reference point to align to.

The same goes for overlapping trajectories, which occur when a person moves back and forth along a similar path. While a person's current position may line up with a dot on a reference trajectory, existing algorithms can't tell whether that position belongs to the outbound leg of the trajectory or the return along the same path.
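To make both failure modes concrete, here is a toy snippet (hypothetical data, not the team's code) that matches each incoming dot to its nearest reference dot by distance alone. During the pause, every observation snaps to the same reference point, and on an out-and-back path each position matches two reference indices at once:

```python
import numpy as np

# Hypothetical reference trajectory: out along x, then back (an overlap).
outbound = np.linspace(0.0, 4.0, 9)
reference = np.concatenate([outbound, outbound[::-1][1:]]).reshape(-1, 1)

# Live dots: the person pauses near x = 2.0, so positions bunch up.
observed = np.array([[1.5], [2.0], [2.0], [2.0], [2.5]])

for pt in observed:
    dists = np.linalg.norm(reference - pt, axis=1)
    nearest = np.flatnonzero(np.isclose(dists, dists.min()))
    # Every paused dot maps to the same reference spot, and each position
    # matches BOTH an outbound and a returning index -- distance alone
    # can't say which way the person is going or how long they'll stop.
    print(pt.ravel(), "-> candidate reference indices", nearest)
```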

As a solution, the MIT researchers devised a "partial trajectory" algorithm that aligns segments of a person's trajectory in real time with a library of previously collected reference trajectories. Because the new algorithm aligns trajectories in both distance and timing, it can accurately anticipate stops and overlaps in a person's path.
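The article doesn't reproduce the researchers' exact formulation, but the core idea can be sketched as an open-ended alignment whose matching cost penalizes mismatches in elapsed time as well as in position. In the hypothetical sketch below, trajectories are arrays of (x, y, t) rows with t measured from each trajectory's start; the weight w_time and the helper names are illustrative assumptions, not the published method:

```python
import numpy as np

def align_partial(observed, reference, w_time=0.5):
    """Sketch: align a partial trajectory of (x, y, t) rows against a
    reference trajectory, where t is time elapsed since each trajectory
    began. Timing enters the matching cost, so bunched-up "pause" dots
    and out-and-back overlaps are no longer ambiguous. Returns the
    reference index best matching the latest observation."""
    n, m = len(observed), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dx = np.linalg.norm(observed[i - 1, :2] - reference[j - 1, :2])
            dt = abs(observed[i - 1, 2] - reference[j - 1, 2])
            step = dx + w_time * dt   # distance AND timing both penalized
            cost[i, j] = step + min(cost[i - 1, j],
                                    cost[i, j - 1],
                                    cost[i - 1, j - 1])
    # Open-ended: the partial trajectory may end anywhere on the reference.
    return int(np.argmin(cost[n, 1:]))

def predict_ahead(observed, reference, horizon=5.0):
    """Read the reference forward by `horizon` seconds from the person's
    estimated progress point -- e.g. where they'll be 5 s from now."""
    j = align_partial(observed, reference)
    t_future = reference[j, 2] + horizon
    k = int(np.argmin(np.abs(reference[:, 2] - t_future)))
    return reference[k, :2]
```

Because paused dots carry advancing timestamps, an alignment of this kind keeps moving through the reference's own pause rather than collapsing onto a single point, which is what lets a predictor estimate how long a stop will last and which leg of an overlapping path the person is on.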

The team tested the algorithm on two human motion datasets: one in which a person intermittently crossed a robot's path in a factory setting (these data were obtained from the team's experiments with BMW), and another in which the group previously recorded hand movements of participants reaching across a table to install a bolt, which a robot would then secure by brushing sealant onto it.

For both datasets, the team’s algorithm was able to make better estimates of a person’s progress through a trajectory, compared with two commonly used partial trajectory alignment algorithms. Furthermore, the team found that when they integrated the alignment algorithm with their motion predictors, the robot could more accurately anticipate the timing of a person’s motion. In the factory floor scenario, for example, they found the robot was less prone to freezing in place, and instead smoothly resumed its task shortly after a person crossed its path.

While the algorithm was evaluated in the context of motion prediction, it can also be used as a preprocessing step for other techniques in the field of human-robot interaction, such as action recognition and gesture detection. The algorithm will be a key tool in enabling robots to recognize and respond to patterns of human movement and behaviors. Ultimately, this can help humans and robots work together in structured environments, such as factory settings and even, in some cases, the home.

Shah and her colleagues will present their results this month at the Robotics: Science and Systems conference in Germany.

Source: MIT
