
EEG Dataset

This dataset was created using braindecode, a library for deep learning with EEG/MEG/ECoG signals.

Dataset Information

============================ ==============================
Property                     Value
============================ ==============================
Number of recordings         1
Dataset type                 Windowed (from Epochs object)
Number of channels           26
Sampling frequency           250 Hz
Number of windows / samples  48
Total size                   0.03 MB
Storage format               Zarr
============================ ==============================

Usage

To load this dataset:

.. code-block:: python

    from braindecode.datasets import BaseConcatDataset

    # Load dataset from Hugging Face Hub
    dataset = BaseConcatDataset.pull_from_hub("username/dataset-name")

    # Access data
    X, y, metainfo = dataset[0]
    # X: EEG data (n_channels, n_times)
    # y: label/target
    # metainfo: window indices

Using with PyTorch DataLoader

.. code-block:: python

    from torch.utils.data import DataLoader

    # Create a DataLoader for training
    train_loader = DataLoader(
        dataset,
        batch_size=32,
        shuffle=True,
        num_workers=4,
    )

    # Training loop
    for X, y, metainfo in train_loader:
        # X shape: [batch_size, n_channels, n_times]
        # y shape: [batch_size]
        # metainfo shape: [batch_size, 2] (start and end indices)
        ...  # process your batch
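As a self-contained sanity check of the batch shapes described above, here is a minimal sketch using a synthetic stand-in dataset (the ``FakeEEGWindows`` class, its labels, and its index values are illustrative inventions, not part of braindecode; only the window/channel/sample counts are taken from this card's metadata):

```python
import torch
from torch.utils.data import DataLoader, Dataset


# Synthetic stand-in mimicking the (X, y, metainfo) tuples described above.
# 48 windows x 26 channels x 250 samples come from this card's metadata;
# labels and start/end indices are made up for the example.
class FakeEEGWindows(Dataset):
    def __init__(self, n_windows=48, n_channels=26, n_times=250):
        self.X = torch.randn(n_windows, n_channels, n_times)
        self.y = torch.randint(0, 2, (n_windows,))
        starts = torch.arange(n_windows)
        self.meta = torch.stack([starts, starts + n_times], dim=1)

    def __len__(self):
        return len(self.X)

    def __getitem__(self, i):
        return self.X[i], self.y[i], self.meta[i]


loader = DataLoader(FakeEEGWindows(), batch_size=16, shuffle=True)
X, y, meta = next(iter(loader))
print(X.shape)     # torch.Size([16, 26, 250])
print(y.shape)     # torch.Size([16])
print(meta.shape)  # torch.Size([16, 2])
```

The default collate function stacks each element of the returned tuples, which is why the three-tuple survives batching unchanged.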

Dataset Format

This dataset is stored in Zarr format, optimized for:

  • Fast random access during training (critical for PyTorch DataLoader)
  • Efficient compression with blosc
  • Cloud-native storage compatibility

For more information about braindecode, visit: https://braindecode.org
