---
language:
  - en
  - zh
license: apache-2.0
pretty_name: AL-GR Raw Sequences 📜
tags:
  - sequential-recommendation
  - raw-data
  - anonymized
  - e-commerce
  - next-item-prediction
  - generative-retrieval
  - semantic-identifiers
task_categories:
  - text-generation
  - text-retrieval
---

# AL-GR/Origin-Sequence-Data: Raw User Behavior Sequences 📜

## About the Dataset

Each row in this dataset (Origin-Sequence-Data) represents one step in a user's journey: a sequence of previously interacted items (`user_history`) and the next item the user interacted with (`target_item`). All item IDs have been anonymized into short, unique strings.
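For illustration, a raw CSV row might look like the following (the anonymized IDs are borrowed from the example later in this card; real rows will differ):

```csv
user_history,target_item
AdPxq 6Vf1Re WkQqK,ECZSq
```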

This dataset is ideal for:

  • πŸ§‘β€πŸ”¬ Researchers who want to design their own data processing or prompting strategies for generative retrieval.
  • πŸ“ˆ Training and evaluating traditional sequential recommendation models (e.g., GRU4Rec, SASRec, etc.).
  • πŸ”Ž Understanding the source data from which the main AL-GR generative dataset was built.

## 🚀 Sample Usage

The data is stored in multiple folders (`s1_splits`, `s2_splits`, etc.), a layout the `datasets` library does not recognize out of the box. To make loading seamless, a small loading script is required.
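For reference, the repository layout looks roughly like this (the CSV file names inside each folder are placeholders; only the folder names are fixed):

```text
Origin-Sequence-Data/
├── s1_splits/
│   ├── part-0001.csv   # placeholder file name
│   └── ...
├── s2_splits/
├── s3_splits/
├── test/
└── origin-sequence-data.py   # the loading script from Step 1
```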

### Step 1: Create the Loading Script

Create a Python file named `origin-sequence-data.py` in your local directory and paste the following code into it.

```python
import csv
import glob

import datasets

_DESCRIPTION = "Raw, anonymized user behavior sequences for next-item prediction."

_CITATION = ""  # See the Citation section of this card.


class OriginSequenceData(datasets.GeneratorBasedBuilder):
    """A loader for the AL-GR Raw User Behavior Sequences."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features({
                "user_history": datasets.Value("string"),
                "target_item": datasets.Value("string"),
            }),
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # The CSV files live alongside this script in the repository.
        # If data_dir is passed to load_dataset, manual_dir points there;
        # otherwise fall back to the current directory.
        repo_path = dl_manager.manual_dir or "."

        folders = {"s1": "s1_splits", "s2": "s2_splits", "s3": "s3_splits", "test": "test"}
        return [
            datasets.SplitGenerator(
                name=split,
                gen_kwargs={"filepaths": sorted(glob.glob(f"{repo_path}/{folder}/*.csv"))},
            )
            for split, folder in folders.items()
        ]

    def _generate_examples(self, filepaths):
        """Yields examples from the data files."""
        key = 0
        for filepath in filepaths:
            with open(filepath, "r", encoding="utf-8", newline="") as f:
                # Assumes each CSV has the header: 'user_history', 'target_item'.
                # If not, switch to csv.reader and access columns by index.
                reader = csv.DictReader(f)
                for row in reader:
                    yield key, {
                        "user_history": row["user_history"],
                        "target_item": row["target_item"],
                    }
                    key += 1
```
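Before uploading, a quick local sanity check is possible from a clone of the repository; `data_dir="."` makes `dl_manager.manual_dir` resolve to the repo root, matching the script above:

```python
from datasets import load_dataset

# Run the script locally against the CSV folders in the current directory.
dataset = load_dataset("./origin-sequence-data.py", data_dir=".")
print(dataset)
```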

### Step 2: Upload the Script

Upload the `origin-sequence-data.py` file to the root directory of this dataset repository on the Hugging Face Hub.

### Step 3: Load the Dataset with One Command!

Once the script is uploaded, you (and anyone else) can load the entire dataset effortlessly:

```python
from datasets import load_dataset

# The loading script will be automatically detected and executed.
# Recent versions of `datasets` may require trust_remote_code=True here.
dataset = load_dataset("AL-GR/Origin-Sequence-Data")

# Access different splits
print("Sample from s1 split:")
print(dataset['s1'][0])

print("\nSample from test split:")
print(dataset['test'][0])
```

πŸ—οΈ Dataset Structure

### Data Fields

- `user_history` (string) 🕒: A space-separated sequence of anonymized item IDs representing the user's past interactions (see the parsing sketch below).
- `target_item` (string) 🎯: The single anonymized item ID that the user interacted with next.
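Because `user_history` is a single space-separated string, splitting it into individual item IDs is a one-liner; a minimal sketch using the example IDs from this card:

```python
row = {"user_history": "AdPxq 6Vf1Re WkQqK", "target_item": "ECZSq"}

# Split the space-separated history into a list of item IDs.
history_ids = row["user_history"].split()
print(history_ids)          # ['AdPxq', '6Vf1Re', 'WkQqK']
print(row["target_item"])   # ECZSq
```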

### Data Splits

The dataset is partitioned into four parts, stored in separate folders:

- `s1_splits`, `s2_splits`, `s3_splits`: Three chronological training splits, useful for time-aware training and evaluation: models can be trained on older data and tested on newer data (see the sketch after this list).
- `test`: A dedicated test set for final model evaluation.
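For example, one plausible time-aware setup (an illustration, not a prescribed protocol) trains on the older splits and validates on the newest one:

```python
from datasets import load_dataset, concatenate_datasets

dataset = load_dataset("AL-GR/Origin-Sequence-Data")

# Train on the two oldest splits, validate on the newest training split,
# and hold out the dedicated test set for final evaluation.
train_data = concatenate_datasets([dataset["s1"], dataset["s2"]])
valid_data = dataset["s3"]
test_data = dataset["test"]

print(len(train_data), len(valid_data), len(test_data))
```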

## 🔗 Relationship to AL-GR

This dataset is the direct precursor to the main AL-GR generative dataset. The transformation is as follows:

- Origin-Sequence-Data (this dataset):
  - `user_history`: `"AdPxq 6Vf1Re WkQqK..."`
  - `target_item`: `"ECZSq"`
- AL-GR (generative dataset):
  - `system`: `"You are a recommendation system..."`
  - `user`: `"The current user's historical behavior is as follows: C...C..."` (IDs might be re-mapped)
  - `answer`: `"C..."` (the target item, re-mapped)

This dataset provides the raw material for anyone wishing to replicate or create variants of the AL-GR prompt format.
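As a rough sketch of that transformation (the prompt strings follow the example above; the ID re-mapping step is omitted because its scheme is not documented here):

```python
def to_generative_example(row):
    """Build a chat-style example from one raw sequence row.

    Re-mapping of item IDs (e.g., to semantic identifiers) is not shown.
    """
    return {
        "system": "You are a recommendation system...",
        "user": "The current user's historical behavior is as follows: "
                + row["user_history"],
        "answer": row["target_item"],
    }

example = to_generative_example(
    {"user_history": "AdPxq 6Vf1Re WkQqK", "target_item": "ECZSq"}
)
print(example["answer"])  # ECZSq
```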

## ✍️ Citation

If you use this dataset in your research, please cite:

## 📜 License

This dataset is licensed under the Apache License 2.0.