Talk Video
Abstract

Despite the rich information provided by sensors such as the Microsoft Kinect in the robotic perception setting, the problem of detecting object instances remains unsolved, even in the tabletop setting, where segmentation is greatly simplified. Existing object detection systems often focus on textured objects, for which local feature descriptors can be used to reliably obtain correspondences between different views of the same object. We examine the benefits of dense feature extraction and multimodal features for improving the accuracy and robustness of an instance recognition system. By combining multiple modalities and blending their scores through an ensemble-based method to generate our final object hypotheses, we obtain significant improvements over previously published results on two RGB-D datasets. On the Challenge dataset, our method results in only one missed detection (achieving 100% precision and 99.77% recall). On the Willow dataset, we also improve significantly on the prior state of the art (achieving 98.28% precision and 87.78% recall), raising the F-score from 0.8092 to 0.9273.
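The F-scores above are the standard harmonic mean of precision and recall. As a quick sanity check (a sketch for illustration, not part of the released code), the Willow numbers reproduce the reported F-score:

```python
def f_score(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Willow numbers reported above: P = 98.28%, R = 87.78%
print(round(f_score(0.9828, 0.8778), 4))  # 0.9273
```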
Datasets

The Challenge and Willow datasets described below were downloaded from here.
Dataset Annotations

Challenge Dataset

We found several errors in the original ground truth data provided:
Willow Dataset

Unfortunately, to the best of our knowledge, there is no ground truth pose information for the Willow dataset. The results reported in our paper were obtained assuming that the objects given in each test case are visible in all frames for that test (which is often not the case). Download the Willow ground truth used for the paper here: [.tar.gz] [.zip]

By manual inspection, we noted many instances where objects in a given test frame were fully occluded, and we modified the ground truth files accordingly. Download the fixed Willow ground truth files (Willow-Vis): [.tar.gz] [.zip]
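The visibility filtering behind Willow-Vis was done by hand, but the underlying idea can be sketched programmatically. Assuming per-frame instance-label images (a hypothetical representation, not the dataset's actual annotation format), one might keep only objects with enough visible pixels:

```python
import numpy as np

def visible_ids(label_img, min_pixels=50):
    """Return object IDs with at least min_pixels visible in an
    instance-label image (0 = background)."""
    ids, counts = np.unique(label_img, return_counts=True)
    return {int(i) for i, c in zip(ids, counts) if i != 0 and c >= min_pixels}

# toy frame: object 1 clearly visible, object 2 almost fully occluded
frame = np.zeros((10, 10), dtype=int)
frame[:5, :5] = 1   # 25 pixels of object 1
frame[0, 9] = 2     # 1 stray pixel of object 2
print(visible_ids(frame, min_pixels=10))  # {1}
```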
Detections & Mistakes

Challenge Dataset

For each test frame, we show our color models projected onto the test image in the predicted pose. We also show the segmentations and IDs of the detected or missing objects. Finally, we provide the scores computed for each individual segmentation cluster. Key for scores in the JSON files:
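Per-cluster scores of this kind are straightforward to consume programmatically. The snippet below is a sketch using an illustrative JSON layout (the field names `sift` and `color` are hypothetical, not the actual keys in the released files):

```python
import json

# Hypothetical layout: one JSON object per test frame, mapping cluster
# IDs to per-modality scores. Field names are illustrative only.
raw = '{"cluster_0": {"sift": 0.91, "color": 0.72}, "cluster_1": {"sift": 0.35, "color": 0.40}}'
scores = json.loads(raw)

def best_cluster(scores):
    """Rank clusters by the sum of their per-modality scores."""
    return max(scores, key=lambda c: sum(scores[c].values()))

print(best_cluster(scores))  # cluster_0
```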
Willow Dataset

As described in our paper, the Willow dataset contains many highly occluded and non-textured views of objects, as well as imposter objects. Though we attain good precision, it is much more difficult to attain good recall.
Code

Our SIFT feature extraction code can be found here.