A repository that contains example JSON files that can be used for different applications of the LeRobot code base. A short sketch showing how to fetch and inspect one of these configs follows the list below.
Currently available configs:
- `env_config_so100.json`: real-robot environment configuration used to teleoperate, record a dataset, and replay a dataset on the real robot with the `lerobot/scripts/rl/gym_manipulator.py` script.
- `train_config_hilserl_so100.json`: training configuration for the real robot with the HILSerl RL framework in LeRobot.
- `gym_hil_env.json`: simulated environment configuration for the gym_hil MuJoCo-based environment.
- `train_gym_hil_env.json`: training configuration for gym_hil with the HILSerl RL framework in LeRobot.
- `reward_classifier_train_config.json`: configuration to train a reward classifier with LeRobot.
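Below is a minimal sketch (not an official LeRobot example) of how one of these JSON files could be downloaded and inspected with the `huggingface_hub` client before being used. The `repo_id` is a placeholder, since this card does not state the repository id; replace it with the actual id of this repo.

```python
# Sketch: fetch one of the example configs from the Hub and inspect it.
# Assumes the repo is hosted as a dataset repository; repo_id is a placeholder.
import json

from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="user/lerobot-hilserl-configs",  # placeholder, not the real repo id
    filename="env_config_so100.json",
    repo_type="dataset",
)

with open(config_path) as f:
    env_config = json.load(f)

# Top-level keys give a quick overview of what the config controls
# (e.g. robot port, cameras, and wrapper settings for the environment).
print(sorted(env_config.keys()))
```

The downloaded file can then be passed to the corresponding LeRobot script; for the real-robot environment config that is `lerobot/scripts/rl/gym_manipulator.py` (check the LeRobot documentation for the exact command-line arguments it expects).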