Request for Information on End-Effector Pose and Camera Parameters

#2
by DeNoise23 - opened

Hi, thank you for releasing this great work — the dataset looks very high-quality and extremely useful.

While exploring the provided data, I noticed that all states and actions seem to be represented as joint angles, and I couldn’t find any recorded end-effector poses. May I ask whether the end-effector pose data was recorded? If not, would it be possible to share the URDF version used during data collection? With that, I should be able to compute the poses via forward kinematics.
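For reference, this is the kind of forward-kinematics computation we have in mind. Below is a minimal numpy sketch over a toy chain of revolute z-axis joints; the link offsets and joint angles are made-up placeholders, not values from the dataset, and a real URDF would also encode per-joint axes and rotations (a library such as Pinocchio would handle the general case):

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the local z-axis (revolute joint)."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def link_offset(xyz):
    """Fixed link translation as a homogeneous transform."""
    T = np.eye(4)
    T[:3, 3] = xyz
    return T

def forward_kinematics(joint_angles, offsets):
    """Chain fixed link offsets with revolute z-rotations, base to end-effector."""
    T = np.eye(4)
    for q, xyz in zip(joint_angles, offsets):
        T = T @ link_offset(xyz) @ rot_z(q)
    return T  # 4x4 end-effector pose in the base frame

# Toy 2-link chain with both offsets along z (hypothetical numbers).
offsets = [(0.0, 0.0, 0.333), (0.0, 0.0, 0.316)]
pose = forward_kinematics([0.0, np.pi / 2], offsets)
```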

In addition, I didn’t find information about the camera intrinsics or extrinsics in the current release. Were these parameters recorded? If not, would it be possible to share the simulation script or configuration files used for data generation, which should contain the camera setup?
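For context on why we are asking about both: relating image pixels to world coordinates requires the intrinsic matrix K together with the camera extrinsics. A minimal pinhole-projection sketch with placeholder values (all numbers below are hypothetical, not from the dataset):

```python
import numpy as np

# Hypothetical camera parameters -- placeholders only.
K = np.array([[600.0,   0.0, 320.0],   # intrinsics: fx, fy, cx, cy
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
T_world_cam = np.eye(4)                # extrinsics: camera pose in the world frame

def project(point_world, K, T_world_cam):
    """Project a 3-D world point into pixel coordinates with a pinhole model."""
    T_cam_world = np.linalg.inv(T_world_cam)      # world -> camera
    p_cam = T_cam_world @ np.append(point_world, 1.0)
    uv = K @ p_cam[:3]
    return uv[:2] / uv[2]                         # perspective divide

uv = project(np.array([0.1, 0.0, 1.0]), K, T_world_cam)  # -> [380., 240.]
```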

Thank you very much for your time and for making this dataset available.

Dear DeNoise23,

Thanks for your interest in our work! I will answer your two questions in turn.

For the first question, regarding the end-effector pose data: for the Franka robot, the end-effector pose is already provided in the proprio and action fields. For the split_aloha, lift2, and genie-1 robots, I will attach the URDFs to the Hugging Face files.

For the second question: I will arrange the camera intrinsics for each of the robots' views; however, we do not save the camera extrinsics in the LeRobot data format.

On the naming: piper100 is the arm of split_aloha, r5a is the arm of lift2, and g1_120s is the Genie-1 robot.

Thank you very much for your detailed and helpful reply. We really appreciate you sharing the URDF files — they are extremely useful for recovering the end-effector poses via forward kinematics.🙏

Regarding the camera parameters, we understand the current data format constraints. However, we were a bit disappointed to learn that the camera extrinsics were not recorded, as accurate camera intrinsics and extrinsics are crucial for our vision-based robotics and policy learning tasks. Given the exceptionally high quality of your dataset, we would very much like to use it in our work and would prefer not to exclude it solely due to the lack of camera extrinsics.

May I kindly ask whether it would be possible in the future to release:

  1. an updated version of the dataset that includes camera intrinsic and extrinsic parameters, or

  2. the simulation scripts or data collection code used to generate the dataset.

Either of these would greatly enhance the usability and impact of this already excellent dataset for the broader community.✨

Thank you again for your time and for making this valuable dataset publicly available.

Dear DeNoise23,

I will release an updated dataset that includes the intrinsic and extrinsic parameters this week. Do you need the extrinsics for the wrist camera as well?

As for the simulation code, we plan to release it in December.

That’s great news, thank you so much!

Yes, we do need the extrinsics for the wrist-mounted camera as well. Having those would be really helpful for our experiments.

Thanks again for taking the time to update the dataset and for planning to release the simulation code — we really appreciate it!
