This repository contains KC-MMBench, a benchmark tailored to real-world short-video scenarios, introduced in the paper "Kwai Keye-VL Technical Report". Built from Kuaishou short-video data, KC-MMBench comprises six task-specific datasets for evaluating how well Vision-Language Models (VLMs) such as Kwai Keye-VL-8B, Qwen2.5-VL, and InternVL understand dynamic, information-dense short-form videos.

For the associated code, detailed documentation, and evaluation scripts, please refer to the official Kwai Keye-VL GitHub repository.

To use KC-MMBench, download it with:

git clone https://huggingface.co/datasets/Kwai-Keye/KC-MMbench
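
After cloning, you can inspect the per-task files before loading them. The sketch below simply walks the cloned directory; it makes no assumptions about the internal layout or file formats, so check what is actually there.

from pathlib import Path

# List every file in the cloned repository by relative path.
root = Path("KC-MMbench")
for path in sorted(p for p in root.rglob("*") if p.is_file()):
    print(path.relative_to(root))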

Tasks

| Task | Description |
| --- | --- |
| CPV | Predicting product attributes in e-commerce. |
| Hot_Videos_Aggregation | Determining whether multiple videos belong to the same topic. |
| Collection_Order | Determining the logical order among multiple videos on the same topic. |
| Pornographic_Comment | Determining whether short-video comments contain pornographic content. |
| High_Like | Binary classification of whether a short video achieves a high like rate. |
| SPU | Determining whether two items are the same product in e-commerce. |

Performance

| Task | Qwen2.5-VL-3B | Qwen2.5-VL-7B | InternVL-3-8B | MiMo-VL-7B | Kwai Keye-VL-8B |
| --- | --- | --- | --- | --- | --- |
| CPV | 12.39 | 20.08 | 14.95 | 16.66 | 55.13 |
| Hot_Videos_Aggregation | 42.38 | 46.35 | 52.31 | 49.00 | 54.30 |
| Collection_Order | 36.88 | 59.83 | 64.75 | 78.68 | 84.43 |
| Pornographic_Comment | 56.61 | 56.08 | 57.14 | 68.25 | 71.96 |
| High_Like | 48.85 | 47.94 | 47.03 | 51.14 | 55.25 |
| SPU | 74.09 | 81.34 | 75.64 | 81.86 | 87.05 |
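
For a rough head-to-head summary, the per-task scores above can be collapsed into an unweighted mean per model (a simple average over the six tasks; the paper may report a different aggregate):

# Unweighted mean of the six per-task scores from the table above.
scores = {
    "Qwen2.5-VL-3B": [12.39, 42.38, 36.88, 56.61, 48.85, 74.09],
    "Qwen2.5-VL-7B": [20.08, 46.35, 59.83, 56.08, 47.94, 81.34],
    "InternVL-3-8B": [14.95, 52.31, 64.75, 57.14, 47.03, 75.64],
    "MiMo-VL-7B": [16.66, 49.00, 78.68, 68.25, 51.14, 81.86],
    "Kwai Keye-VL-8B": [55.13, 54.30, 84.43, 71.96, 55.25, 87.05],
}
for model_name, vals in scores.items():
    print(f"{model_name}: {sum(vals) / len(vals):.2f}")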

Usage

This section gives a quick guide to running inference with the keye-vl-utils library, which handles the vision-input preprocessing (images and video frames) required by Keye-series models such as Kwai Keye-VL-8B.

Install keye-vl-utils

First, install the necessary utility library:

pip install keye-vl-utils
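
A quick way to confirm the install worked is to import the helper that the inference example below relies on:

# Should run without error if keye-vl-utils was installed correctly.
from keye_vl_utils import process_vision_info
print(process_vision_info.__module__)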

Keye-VL Inference Example

Here's an example of performing inference with a Kwai Keye-VL model, demonstrating how to prepare inputs for both image and video scenarios.

from transformers import AutoModel, AutoProcessor
from keye_vl_utils import process_vision_info

model_path = "Kwai-Keye/Keye-VL-8B-Preview"

# Load the model; device_map="auto" places it on the available device(s),
# so no extra .to("cuda") call is needed.
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype="auto",
    device_map="auto",
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)

# Example messages demonstrating the supported input types (image and video).
messages = [
    # Image inputs: local file, URL, or base64 data URI.
    [{"role": "user", "content": [{"type": "image", "image": "file:///path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}],
    [{"role": "user", "content": [{"type": "image", "image": "http://path/to/your/image.jpg"}, {"type": "text", "text": "Describe this image."}]}],
    [{"role": "user", "content": [{"type": "image", "image": "data:image;base64,/9j/..."}, {"type": "text", "text": "Describe this image."}]}],

    # Video inputs (most relevant for KC-MMBench): a video file, a list of
    # pre-extracted frames, or a video file with sampling/resizing options.
    [{"role": "user", "content": [{"type": "video", "video": "file:///path/to/video1.mp4"}, {"type": "text", "text": "Describe this video."}]}],
    [{"role": "user", "content": [{"type": "video", "video": ["file:///path/to/extracted_frame1.jpg", "file:///path/to/extracted_frame2.jpg", "file:///path/to/extracted_frame3.jpg"]}, {"type": "text", "text": "Describe this video."}]}],
    [{"role": "user", "content": [{"type": "video", "video": "file:///path/to/video1.mp4", "fps": 2.0, "resized_height": 280, "resized_width": 280}, {"type": "text", "text": "Describe this video."}]}],
]

processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)

# Build the chat prompts and collect the visual inputs for the whole batch.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
images, videos, video_kwargs = process_vision_info(messages, return_video_kwargs=True)
inputs = processor(text=text, images=images, videos=videos, padding=True, return_tensors="pt", **video_kwargs).to("cuda")

# Generate, then decode only the newly generated tokens.
generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True))
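
The example above batches six independent conversations; for a single KC-MMBench-style query, pass one conversation instead. A minimal sketch reusing model and processor from above (the video path and yes/no question are placeholders):

# Single-conversation variant; the video path and prompt are placeholders.
single_message = [[{"role": "user", "content": [
    {"type": "video", "video": "file:///path/to/video1.mp4", "fps": 2.0},
    {"type": "text", "text": "Do these clips belong to the same topic? Answer yes or no."},
]}]]
text = processor.apply_chat_template(single_message, tokenize=False, add_generation_prompt=True)
images, videos, video_kwargs = process_vision_info(single_message, return_video_kwargs=True)
inputs = processor(text=text, images=images, videos=videos, padding=True, return_tensors="pt", **video_kwargs).to("cuda")
out = model.generate(**inputs, max_new_tokens=32)
print(processor.batch_decode(out[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)[0])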

Evaluation

For detailed instructions on evaluating models with the KC-MMBench datasets, including setup and how to run the evaluation scripts, please refer to the evaluation/KC-MMBench/README.md file in the official Kwai Keye-VL GitHub repository.

Below is an example configuration for evaluating VLMs on our datasets:

{
    "model": "...", # Specify your model
    "data": {
        "CPV": {
            "class": "KwaiVQADataset",
            "dataset": "CPV"
        },
        "Hot_Videos_Aggregation": {
            "class": "KwaiVQADataset",
            "dataset": "Hot_Videos_Aggregation"
        },
        "Collection_Order": {
            "class": "KwaiVQADataset",
            "dataset": "Collection_Order"
        },
        "Pornographic_Comment": {
            "class": "KwaiYORNDataset",
            "dataset": "Pornographic_Comment"
        },
        "High_like":{
            "class":"KwaiYORNDataset",
            "dataset":"High_like"
        },
        "SPU": {
            "class": "KwaiYORNDataset",
            "dataset": "SPU"
        }
    }
}
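
Note that the "# Specify your model" comment above is for illustration only and is not valid JSON. A minimal sketch of loading and sanity-checking such a config with the standard library (the file name is a placeholder):

import json

# Load the evaluation config. Remove the illustrative comment first,
# since JSON has no comment syntax; the file name is a placeholder.
with open("kc_mmbench_config.json") as f:
    config = json.load(f)

print("Model:", config["model"])
for task, spec in config["data"].items():
    print(f"{task}: class={spec['class']}, dataset={spec['dataset']}")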