Download the source from https://github.com/joschu/rapprentice
Dependencies:
Build procedure:
The training pipeline is illustrated below.
File formats:
See the sampledata directory for examples of these formats.
You will typically collect multiple runs of the whole task. Then you run a script to generate an HDF5 file that aggregates all of these demonstrations, which are broken into segments.
To see how to run the data processing scripts, look at example_pipeline/overhand.py, which processes an example dataset containing demonstrations of tying an overhand knot in rope. To run the script, you'll need to download the sample data with scripts/download_sampledata.py.
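The aggregated demonstration file can be inspected with h5py. The sketch below builds a tiny example file and lists its contents; the group and dataset names ("demo00_seg00", "cloud_xyz") are hypothetical placeholders, not the actual layout — see the sampledata directory for the real formats.

```python
# Sketch: inspecting an aggregated demonstration file with h5py.
# The group/dataset names here are made-up placeholders; check the
# sampledata directory for the layout the pipeline actually produces.
import h5py
import numpy as np

def list_segments(path):
    """Print each top-level segment group and the datasets it contains."""
    with h5py.File(path, "r") as f:
        for seg_name, seg in f.items():
            print(seg_name)
            for key in seg:
                item = seg[key]
                print("   ", key, item.shape if hasattr(item, "shape") else "")

# Build a tiny example file so the sketch is self-contained.
with h5py.File("demos_example.h5", "w") as f:
    seg = f.create_group("demo00_seg00")
    seg.create_dataset("cloud_xyz", data=np.zeros((100, 3)))

list_segments("demos_example.h5")
```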
./do_task.py h5file
You can run this program in various simulation configurations that let you test your algorithm without using the robot.
Various other scripts are included in the scripts directory:
PR2.py is set up so you can send commands to multiple body parts simultaneously, so most of the commands, such as goto_joint_positions, are non-blocking. If you want to wait until all commands are done, call pr2.join_all().
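The non-blocking pattern described above can be sketched with a minimal threading mock. This is not the actual PR2.py implementation; only the names goto_joint_positions and join_all come from the text, and the classes below are stand-ins for illustration.

```python
# Mock of the non-blocking command pattern: each command starts a
# background thread and returns immediately; join_all() blocks until
# every outstanding command on every body part has finished.
import threading
import time

class MockBodyPart:
    def __init__(self):
        self.threads = []
        self.positions = None

    def goto_joint_positions(self, positions):
        """Start the motion in a background thread and return immediately."""
        t = threading.Thread(target=self._move, args=(positions,))
        t.start()
        self.threads.append(t)

    def _move(self, positions):
        time.sleep(0.1)          # stand-in for the actual motion
        self.positions = positions

class MockPR2:
    def __init__(self):
        self.larm = MockBodyPart()
        self.rarm = MockBodyPart()

    def join_all(self):
        """Wait until all commands on all body parts are done."""
        for part in (self.larm, self.rarm):
            for t in part.threads:
                t.join()
            part.threads = []

pr2 = MockPR2()
pr2.larm.goto_joint_positions([0.1, 0.2])   # returns immediately
pr2.rarm.goto_joint_positions([0.3, 0.4])   # both arms move concurrently
pr2.join_all()                              # block until both finish
print(pr2.larm.positions, pr2.rarm.positions)
```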
Start teleop and mannequin. Joystick (default) teleop is recommended.
Move the robot's arms to check that mannequin mode is working: the arms should move, but with some resistance. Restart mannequin if needed; it should always be the last thing started.
Make a directory somewhere to store the demos.
Once recording begins, mark the start of each segment with a "look" followed by a "start". The transformation will be computed from the scene state at the time of the "look".
Manually move the robot's arms as desired, and use the joystick to open and close the grippers. When finished with a segment, press "stop". Repeat the look, start, stop cycle for each segment or configuration. Try to use robot-friendly motions.
Press Ctrl-C to interrupt the script when finished, then save the file. If you wish to start over and discard the recording, tell the script not to save the file; it may save the files anyway, in which case delete them from the folder manually.
If multiple demonstrations have been recorded, edit or check the master YAML file so that it includes the paths to all of the demonstrations; they will then be merged into one h5 file.
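A master file of this kind might look like the sketch below, parsed here with PyYAML. The exact schema (the name and demos keys, the demo_dir entries) is an assumption for illustration; check the files generated by the recording scripts for the real format.

```python
# Sketch of a hypothetical master YAML file listing demonstration paths.
# The key names here are assumptions, not the documented schema.
import yaml

master_text = """
name: overhand_knot
demos:
  - demo_dir: demos/demo00
  - demo_dir: demos/demo01
"""

master = yaml.safe_load(master_text)
for entry in master["demos"]:
    print(entry["demo_dir"])
```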
If you get an error about stamps.txt not being found, the script may not have the correct path to the images folder. If the images folder is named name0#1, rename it to name0.
If you get an error about rgb being None, edit the script so that the way it constructs the rgb image names matches the actual file names. For example, if the files are named 00025.jpg, use the pattern %05d.jpg.
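The fix above is just a matter of matching the zero-padding in the format string to the file names on disk, e.g.:

```python
# A five-digit zero-padded pattern matches files like 00025.jpg.
pattern = "%05d.jpg"
print(pattern % 25)    # -> 00025.jpg

# A mismatched pattern produces names that don't exist on disk,
# which is what leads to rgb being None.
print("%d.jpg" % 25)   # -> 25.jpg
```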
If needed, inspect the h5 file with Python and h5py. You can also merge it with another h5 file containing other segments using h5py's .copy() method; see the h5py docs for more info. Use this method of merging multiple demonstrations if the first method fails.
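The .copy()-based merge can be sketched as follows. The file, group, and dataset names here are made up for illustration; the pattern is simply to copy every top-level group from one file into the other.

```python
# Sketch: merging segments from one h5 file into another with
# h5py's Group.copy(). All names below are illustrative only.
import h5py
import numpy as np

# Create two small files standing in for separately processed demos.
with h5py.File("demos_a.h5", "w") as f:
    f.create_group("seg00").create_dataset("x", data=np.arange(3))
with h5py.File("demos_b.h5", "w") as f:
    f.create_group("seg01").create_dataset("x", data=np.arange(4))

# Copy every top-level group of demos_b.h5 into demos_a.h5.
with h5py.File("demos_a.h5", "a") as dst, h5py.File("demos_b.h5", "r") as src:
    for name in src:
        src.copy(name, dst)

with h5py.File("demos_a.h5", "r") as f:
    print(sorted(f.keys()))    # -> ['seg00', 'seg01']
```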
Make sure mannequin and teleop are stopped.
Check that the Kinect is still aimed properly.