Dataset Viewer
The dataset viewer is not available for this dataset: the features (columns) of the 'train' split of the 'default' config cannot be extracted.
Error code: FeaturesError
Exception: ArrowInvalid
Message: Schema at index 1 was different:

    episode_index: int64
    tasks: list<item: string>
    length: int64

    vs

    robot_type: string
    codebase_version: string
    total_episodes: int64
    total_frames: int64
    total_tasks: int64
    total_videos: int64
    total_chunks: int64
    chunks_size: int64
    fps: int64
    splits: struct<train: string>
    data_path: string
    video_path: string
    features: struct<action: struct<dtype: string, shape: list<item: int64>, names: list<item: string>>, observation.state: struct<dtype: string, shape: list<item: int64>, names: list<item: string>>, timestamp: struct<dtype: string, shape: list<item: int64>, names: null>, episode_index: struct<dtype: string, shape: list<item: int64>, names: null>, frame_index: struct<dtype: string, shape: list<item: int64>, names: null>, task_index: struct<dtype: string, shape: list<item: int64>, names: null>, index: struct<dtype: string, shape: list<item: int64>, names: null>, observation.images.main: struct<dtype: string, shape: list<item: int64>, names: list<item: string>, info: struct<video.fps: int64, video.codec: string, video.pix_fmt: string, video.is_depth_map: bool, has_audio: bool>>>
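The two schemas above look like two different metadata files being streamed as if they were one table: the first (episode_index, tasks, length) matches the per-episode records that LeRobot-format datasets keep in meta/episodes.jsonl, while the second (robot_type, codebase_version, totals, features, ...) matches the dataset-level header in meta/info.json. A quick way to check this locally is to fetch just those two files and compare their keys. The sketch below assumes the LeRobot v2 file layout and uses a placeholder repo id; adjust both if this repository is organized differently.

```python
import json
from huggingface_hub import hf_hub_download

REPO_ID = "<user>/bow_emote"  # placeholder: substitute the dataset's actual Hub repo id

# File paths follow the LeRobot v2 layout (assumption); adjust if this repo differs.
info_path = hf_hub_download(REPO_ID, "meta/info.json", repo_type="dataset")
episodes_path = hf_hub_download(REPO_ID, "meta/episodes.jsonl", repo_type="dataset")

with open(info_path) as f:
    info_keys = sorted(json.load(f))
with open(episodes_path) as f:
    episode_keys = sorted(json.loads(f.readline()))

print("info.json keys:     ", info_keys)     # robot_type, codebase_version, total_*, features, ...
print("episodes.jsonl keys:", episode_keys)  # episode_index, tasks, length
```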
Traceback:

    Traceback (most recent call last):
      File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 231, in compute_first_rows_from_streaming_response
        iterable_dataset = iterable_dataset._resolve_features()
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 3335, in _resolve_features
        features = _infer_features_from_batch(self.with_format(None)._head())
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2096, in _head
        return next(iter(self.iter(batch_size=n)))
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 2296, in iter
        for key, example in iterator:
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1856, in __iter__
        for key, pa_table in self._iter_arrow():
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1878, in _iter_arrow
        yield from self.ex_iterable._iter_arrow()
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 520, in _iter_arrow
        yield new_key, pa.Table.from_batches(chunks_buffer)
      File "pyarrow/table.pxi", line 4116, in pyarrow.lib.Table.from_batches
      File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
      File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
    pyarrow.lib.ArrowInvalid: Schema at index 1 was different: [same two schemas as in the message above]
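The failing call in the traceback is pa.Table.from_batches, which requires every record batch in the buffer to share a single schema. The short, self-contained repro below (dummy values, not taken from this dataset) raises the same ArrowInvalid when two batches with different columns are combined, which is why the viewer reports a FeaturesError for this split:

```python
import pyarrow as pa

# Two record batches with different columns, mimicking an episodes.jsonl row and an
# info.json record ending up in the same stream (all values here are dummies).
episode_batch = pa.RecordBatch.from_pydict(
    {"episode_index": [0], "tasks": [["example task"]], "length": [100]}
)
info_batch = pa.RecordBatch.from_pydict(
    {"robot_type": ["example-robot"], "codebase_version": ["v2.0"], "total_episodes": [1]}
)

try:
    pa.Table.from_batches([episode_batch, info_batch])
except pa.lib.ArrowInvalid as err:
    print(err)  # -> "Schema at index 1 was different: ..."
```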
bow_emote
This dataset was generated using a phospho dev kit.
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be used directly to train a policy with imitation learning, and it is compatible with LeRobot and RLDS.
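Because the data follows the LeRobot format, the most reliable way to load it is through LeRobot's own dataset class rather than generic datasets streaming (which is what the viewer error above comes from). The sketch below is an assumption-laden example, not an official recipe: it assumes a recent LeRobot release where LeRobotDataset is importable from lerobot.common.datasets.lerobot_dataset and exposes num_episodes, and it uses a placeholder repo id. The observation.state and action keys come from the feature schema shown in the error message.

```python
import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # import path may vary by LeRobot version

# Placeholder repo id: replace with the dataset's actual "<user>/bow_emote" path on the Hub.
dataset = LeRobotDataset("<user>/bow_emote")
print(f"{dataset.num_episodes} episodes, {len(dataset)} frames")  # num_episodes assumed available

# LeRobotDataset is a torch Dataset, so a standard DataLoader works for an imitation-learning training loop.
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
batch = next(iter(loader))
print(batch["observation.state"].shape, batch["action"].shape)
```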