Abstract: "For an autonomous vehicle to operate safely and effectively, it must identify its surroundings and infer the state of its environment over time; decision-making processes such as motion planning and control depend on these capabilities. More precisely, the vehicle must be able to track the state of objects in the environment through occlusion and predict how they will evolve in the future. Typically, tracking systems achieve this with multiple hand-engineered stages, such as object detection and semantic classification, which becomes increasingly difficult as the complexity of the surroundings grows.
In this talk we present recent advances in object classification techniques for Lidar point clouds collected in urban areas, focusing on two approaches. The first tracks static and dynamic objects for an autonomous vehicle operating in complex urban environments, and demonstrates that the underlying representation learned for the tracking task can be leveraged, via inductive transfer, to train an object detector in a data-efficient manner.
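The inductive-transfer idea can be illustrated with a toy sketch (everything below is a synthetic stand-in, not the talk's actual models): a representation is first learned from abundant unlabelled data, here with plain PCA playing the role of the tracker's learned features, and a lightweight classifier head is then fit on top of it with only a handful of labels.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3
v = rng.normal(size=d)
v /= np.linalg.norm(v)

def sample(label, n):
    # Two synthetic classes separated along direction v (illustrative data).
    return (2 * label - 1) * 2.0 * v + rng.normal(size=(n, d))

# "Tracking" phase: abundant unlabelled data -> learn a representation.
# PCA stands in for the representation the tracking network would learn.
X_unlab = np.vstack([sample(0, 200), sample(1, 200)])
mu = X_unlab.mean(axis=0)
_, _, Vt = np.linalg.svd(X_unlab - mu, full_matrices=False)
encode = lambda X: (X - mu) @ Vt[:k].T  # frozen, transferred features

# "Detection" phase: with only 3 labelled examples per class, fit a
# nearest-centroid head on top of the transferred features.
c0 = encode(sample(0, 3)).mean(axis=0)
c1 = encode(sample(1, 3)).mean(axis=0)
predict = lambda X: (np.linalg.norm(encode(X) - c1, axis=1)
                     < np.linalg.norm(encode(X) - c0, axis=1)).astype(int)

X_test = np.vstack([sample(0, 50), sample(1, 50)])
y_test = np.array([0] * 50 + [1] * 50)
accuracy = (predict(X_test) == y_test).mean()
```

The point of the sketch is data efficiency: the representation is fixed before any detection labels are seen, so the labelled set only has to constrain a very small head.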
The second approach proposes a complete pipeline developed especially for distinguishing outdoor 3-D urban objects. First, it segments the point cloud into regions of ground, short objects (low foreground) and tall objects (high foreground). Then, using a novel two-layer grid structure, it performs efficient connected-component analysis on the foreground regions to produce distinct groups of points, each representing a different urban object. Next, depth images are created from the object candidates, and an appearance-based preliminary classification is applied by a Convolutional Neural Network (CNN). Finally, the approach refines the classification with contextual features that account for expected scene topologies."
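The first two stages of such a pipeline can be sketched as follows. This is a deliberately simplified, single-layer grid (not the talk's two-layer structure), with illustrative cell sizes and height thresholds: cells are labelled ground, low foreground, or high foreground by the height span of their points, and foreground cells are then grouped into object candidates by 4-connected component analysis.

```python
import numpy as np
from collections import deque

def segment_and_group(points, cell=0.5, ground_h=0.2, low_h=1.5):
    """Label grid cells as ground / low / high foreground by the height
    span of their points, then group foreground cells into objects via
    4-connected components. Thresholds and names are illustrative."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    spans = {}  # cell -> (min z, max z)
    for c, z in zip(map(tuple, ij), points[:, 2]):
        lo, hi = spans.get(c, (z, z))
        spans[c] = (min(lo, z), max(hi, z))
    label = {c: ('ground' if hi - lo < ground_h else
                 'low' if hi - lo < low_h else 'high')
             for c, (lo, hi) in spans.items()}
    # Connected-component analysis over foreground cells (BFS flood fill).
    fg = {c for c, l in label.items() if l != 'ground'}
    objects, seen = [], set()
    for start in fg:
        if start in seen:
            continue
        group, queue = [], deque([start])
        seen.add(start)
        while queue:
            i, j = queue.popleft()
            group.append((i, j))
            for n in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                if n in fg and n not in seen:
                    seen.add(n)
                    queue.append(n)
        objects.append(group)  # one cell group per object candidate
    return label, objects
```

Each returned group of cells would then be rendered as a depth image and passed to the CNN for the appearance-based classification stage.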