Here you can find data we have collected for the objects used in the Amazon Picking Challenge. The data has been collected and processed using the same system described in the ICRA 2014 publication "A Large-Scale 3D Database of Object Instances" and the ICRA 2015 publication "Range Sensor and Silhouette Fusion for High-Quality 3D Scanning".
Specifically, for each object, we provide the data described below.
Note that some objects, depending on their properties (e.g., transparency), may not have complete point clouds or meshes. We include them because the raw data and/or the partial meshes may still be useful. In particular, first_years_take_and_toss_straw_cups, munchkin_white_hot_duck_bath_toy, and safety_works_safety_glasses have models of significantly below-average quality.
You can access the data via this Google Drive link.
To immediately get started loading Kinbodies into OpenRAVE, you can use the files in 'kinbody.tgz' for each object.
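As a minimal sketch (the object name and the exact .kinbody.xml path inside the archive are assumptions; use whatever files the extracted kinbody.tgz actually contains), loading a model into OpenRAVE from Python might look like this:

    from openravepy import Environment

    # Create an OpenRAVE environment and load an extracted kinbody file.
    # The path below is illustrative; substitute the file unpacked from the
    # object's kinbody.tgz.
    env = Environment()
    env.Load('crayola_64_ct/kinbody/crayola_64_ct.kinbody.xml')

    # Retrieve the loaded body for downstream planning or grasping code.
    body = env.GetBodies()[0]
    print(body.GetName())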
For each object, we provide three types of .tgz files. The data contained in each is described below.
The RGB-D sensors have names of the form "NP[1-5]", and the RGB cameras have names of the form "N[1-5]". The cameras are arranged in a quarter-circular arc, with the overhead position labeled 5 and the lowest-to-the-ground position labeled 1. The names are shown in the image below. These names are used everywhere, including in filenames and calibration information.
The calibration file contains the intrinsic matrices for each Canon RGB camera and for the RGB and IR/depth cameras of each Primesense Carmine.
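For example, the calibration file can be read with h5py; the key name below ('NP5_rgb_K') is an assumption, so list the file's keys to find the actual names. Given a 3x3 intrinsic matrix K, a 3D point in that camera's frame projects to pixel coordinates as follows:

    import h5py
    import numpy as np

    with h5py.File('calibration.h5', 'r') as f:
        print(list(f.keys()))           # inspect the actual dataset names
        K = np.asarray(f['NP5_rgb_K'])  # hypothetical key for NP5's RGB intrinsics

    def project(K, point_cam):
        """Pinhole projection of a 3D point (camera frame) to pixel coordinates."""
        u, v, w = K @ np.asarray(point_cam, dtype=float)
        return u / w, v / w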
One of the overhead cameras is designated as the "reference camera." Currently, the reference camera for all objects is NP5. Relative transformations (given as homogeneous transformation matrices) between all cameras and the reference camera are provided in each 'calibration.h5' file. Note that different objects may have different calibration information. Using the relative transforms, one may transform points from any camera's frame into any other camera's frame.
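A sketch of that operation, assuming each calibration entry is a 4x4 homogeneous matrix mapping points from the named camera's frame into the reference (NP5) frame; the key names and direction convention are assumptions that should be verified against the file:

    import h5py
    import numpy as np

    with h5py.File('calibration.h5', 'r') as f:
        # Hypothetical key names; verify with list(f.keys()).
        H_np1_to_ref = np.asarray(f['H_NP1'])  # NP1 frame -> reference frame
        H_n2_to_ref = np.asarray(f['H_N2'])    # N2 frame -> reference frame

    # Map a homogeneous point from NP1's frame into N2's frame via the reference.
    p_np1 = np.array([0.1, 0.0, 0.5, 1.0])
    p_n2 = np.linalg.inv(H_n2_to_ref) @ (H_np1_to_ref @ p_np1)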
The pose files (e.g. "poses/NP5_60_pose.h5") contain the transformation matrix from the reference camera to the turntable frame for each turntable position.
By combining the calibration relative transformations with the reference-to-table transformations, one can obtain pose information for every image or point cloud.
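A sketch of that combination under the same assumptions (transform directions and HDF5 key names must be checked against the actual files): the camera-to-table transform is the product of the reference-to-table pose and the camera-to-reference calibration transform.

    import h5py
    import numpy as np

    with h5py.File('calibration.h5', 'r') as calib:
        H_np1_to_ref = np.asarray(calib['H_NP1'])  # hypothetical key; NP1 -> reference

    with h5py.File('poses/NP5_60_pose.h5', 'r') as pose:
        # Assumed to hold a single 4x4 matrix mapping reference frame -> table frame.
        H_ref_to_table = np.asarray(pose[list(pose.keys())[0]])

    # Points in NP1's frame (turntable position 60) can now be expressed in the table frame.
    H_np1_to_table = H_ref_to_table @ H_np1_to_ref
    p_table = H_np1_to_table @ np.array([0.0, 0.0, 0.4, 1.0])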