For self-driving cars to work, they've got to do it all, and that includes navigating city streets, where one of the biggest challenges isn't just other cars but pedestrians. Humans have a sense of what a person will do based on their movements. Computers, not so much. Toyota has an idea how to fix that.
In a patent filing, Toyota describes an image processor that combines camera feeds to track "people in realist street conditions".
What kind of conditions?
Say someone is walking up to a street corner. Based on how they're moving, the cameras and the computer they feed information into could determine whether that person is going to stop on the sidewalk or continue into the crosswalk. The system gauges the pedestrian's intentions from posture and movement, giving the car the go-ahead or telling it to stop.
The filing describes a three-stage process, with the pose of the pedestrian rendered in 3D using a combination of two 2D tracking systems: one for the pedestrian and another for perspective. Toyota explains it can use standard monocular camera tech and have its output processed through a computer, rather than relying on a more complicated and expensive stereoscopic camera.
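To make the idea concrete, here is a toy sketch of that general approach: fusing a 2D pedestrian track with a 2D perspective estimate (here, a ground-plane homography) to recover world-frame motion from a single monocular camera, then guessing intent from the recovered velocity. This is not Toyota's actual method; every function name, threshold, and the homography itself are illustrative assumptions.

```python
# Toy sketch (NOT Toyota's patented method): combine a 2D pedestrian track
# with a 2D perspective estimate to get world-frame motion from one camera.
# All names, thresholds, and the demo homography are illustrative assumptions.

def apply_homography(H, point):
    """Map an image-plane point (x, y) to ground-plane coordinates."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def predict_intent(H, foot_track, curb_y=0.0, walking_speed=0.3):
    """Guess whether a pedestrian will enter the roadway.

    foot_track: per-frame image positions of the pedestrian's feet.
    Returns "enter" if the last two frames show movement toward the curb
    line (world y decreasing toward curb_y) at walking speed, else "stop".
    """
    world = [apply_homography(H, p) for p in foot_track]
    (x0, y0), (x1, y1) = world[-2], world[-1]
    vy = y1 - y0  # per-frame motion toward the curb (negative = toward road)
    speed = (vy ** 2 + (x1 - x0) ** 2) ** 0.5
    near_and_heading_in = vy < 0 and y1 <= curb_y + 2.0  # within ~2 m of curb
    return "enter" if near_and_heading_in and speed >= walking_speed else "stop"

# Identity homography for the demo: image coords already equal ground metres.
H = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
striding = [(0, 3.0), (0, 2.5), (0, 2.0)]   # steady stride toward the curb
slowing = [(0, 3.0), (0, 2.9), (0, 2.85)]   # decelerating on the sidewalk
print(predict_intent(H, striding))  # enter
print(predict_intent(H, slowing))   # stop
```

A real system would of course feed pose estimates, not just foot positions, into a learned model; the point of the sketch is only that perspective recovery lets a single cheap camera stand in for a stereoscopic rig.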
The technology could easily be used for next-generation emergency braking assist in cities and neighborhoods, and the jump to autonomous cars wouldn't be far behind.
Hat tip to