# OpenCS2 — POV Renders

Browse with the OpenCS2 Viewer — every match, map, and round, with all 10 player POVs synced on one timeline.
Tick-aligned Counter-Strike 2 POV training clips, rendered from `blanchon/cs2_dataset_demo`. Each row is ≤1 minute of one player's perspective; ten POVs per round share the same tick clock.
Per chunk:
- Video — 1280×720 @ 32 fps, near-lossless H.264.
- Audio — per-player stereo, mixed from that player's position and orientation.
- Inputs — every tick: keys, mouse delta, view angles, fire/jump/use, weapon switches.
- World state — every tick, all 10 players: position, velocity, view, health, armor, weapon, alive flag.
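Video frames and tick data line up arithmetically. A minimal sketch of mapping a frame index back to its demo tick, assuming the common 64-tick CS2 demo rate (an assumption — the card does not state the tick rate) and the 32 fps renders above; `frame_to_tick` and `start_tick` are hypothetical names:

```python
TICK_RATE = 64  # assumption: CS2 demos typically record at 64 ticks/s
FPS = 32        # render rate from this card

def frame_to_tick(frame_index: int, start_tick: int) -> int:
    """Demo tick shown at a given frame of a chunk's video."""
    ticks_per_frame = TICK_RATE // FPS  # 2 ticks advance per rendered frame
    return start_tick + frame_index * ticks_per_frame

print(frame_to_tick(0, 4800))    # first frame -> the chunk's start tick: 4800
print(frame_to_tick(320, 4800))  # 10 s in: 4800 + 320*2 = 5440
```

Because all ten POVs of a round share the tick clock, the same arithmetic aligns frames across players.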
## Usage

Four configs. `previews` is the cheap default; `chunks` is the heavy training data.
| Config | Row | Use |
|---|---|---|
| `previews` (default) | low-res `preview.mp4` + 1 Hz inputs/world sidecars | browsing, sanity checks |
| `chunks` | path-only `video.mp4` + `audio.wav`, embedded inputs/worlds | training |
| `matches` | one per `(match_id, map_name)` with team/event metadata | filtering / index |
| `rounds` | one per `(match_id, map_name, round)` with tick boundaries | filtering / index |
### Stream with `datasets`

```python
from datasets import load_dataset

# Default browsing view
previews = load_dataset("blanchon/cs2_dataset_render", split="train", streaming=True)

# Training rows. Pass columns/filters at load time so the parquet reader pushes them down.
chunks = load_dataset(
    "blanchon/cs2_dataset_render", "chunks",
    split="train", streaming=True,
    columns=["video", "audio", "inputs", "worlds", "match_id", "round", "player"],
    filters=[("player", "==", 0)],
)
```
### Query with DuckDB

```sql
INSTALL httpfs; LOAD httpfs;

-- Match index
SELECT match_id, map_name, team1, team2, event, match_date
FROM 'hf://datasets/blanchon/cs2_dataset_render/index/manifest-*.parquet'
WHERE event ILIKE '%IEM%';

-- Long rounds only
SELECT match_id, map_name, round, round_duration_ticks
FROM 'hf://datasets/blanchon/cs2_dataset_render/index/rounds-*.parquet'
WHERE round_duration_ticks > 3000;

-- Preview rows, filtered to AWP plays (cheap visual triage)
SELECT match_id, round, player, chunk_index, primary_weapon
FROM 'hf://datasets/blanchon/cs2_dataset_render/data/**/chunks-preview-*.parquet'
WHERE primary_weapon = 'AWP';
```
### Partial download with the `hf` CLI

Hive partitioning lets you target a single match/map/player without touching the rest:

```bash
# Index files only (~MB, scan-friendly)
hf download blanchon/cs2_dataset_render --repo-type dataset --include "index/*.parquet"

# All previews for one match
hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "data/match_id=2393343/**/chunks-preview-*.parquet" \
  --include "data/match_id=2393343/**/previews/**"

# Full chunks (video+audio+parquet) for one player on one map
hf download blanchon/cs2_dataset_render --repo-type dataset \
  --include "data/match_id=2393343/map_name=de_ancient/player=0/**"
```
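The same include globs work from Python via `allow_patterns` in `huggingface_hub.snapshot_download`, which applies fnmatch-style matching. A small sketch of how such patterns prune a file listing — the paths below are illustrative, not real repo contents:

```python
from fnmatch import fnmatch

# Illustrative repo paths (made-up shard names); real listings come from the Hub.
paths = [
    "index/manifest-a1-0001.parquet",
    "data/match_id=2393343/map_name=de_ancient/player=0/chunks-full-a1-0002.parquet",
    "data/match_id=2393343/map_name=de_ancient/player=0/chunks/chunk_0/video.mp4",
    "data/match_id=2393343/map_name=de_ancient/player=3/chunks-preview-a1-0003.parquet",
    "data/match_id=9999999/map_name=de_mirage/player=0/chunks/chunk_0/video.mp4",
]

def select(paths, patterns):
    # fnmatch's '*' also crosses '/', so one '*' behaves like the CLI's '**'
    return [p for p in paths if any(fnmatch(p, pat) for pat in patterns)]

print(select(paths, ["index/*.parquet"]))
print(select(paths, ["data/match_id=2393343/map_name=de_ancient/player=0/*"]))
```

The second call keeps only the two files under that exact partition, which is the pruning the Hive layout is designed for.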
## Structure

### Repository layout

```
data/
  match_id=<id>/map_name=<map>/player=<0-9>/
    chunks-preview-<machine>-<uuid>.parquet
    chunks-full-<machine>-<uuid>.parquet
    chunks/chunk_<n>/{video.mp4, audio.wav}
    previews/chunk_<n>/{preview.mp4, inputs.preview.json, world.preview.jsonl}
index/
  manifest-<machine>-<uuid>.parquet   # one row per (match, map)
  rounds-<machine>-<uuid>.parquet     # one row per (match, map, round)
```

Hive `key=value` directories prune at the path level. Media files are loose; the parquets store relative `*_path` columns plus an `hf://...` URI for the Hub dataset viewer. Parallel render workers each write their own `<machine>-<uuid>` shard so uploads never collide.
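Partition keys can be recovered from any file path with a one-line regex; `parse_hive` below is a hypothetical helper, not part of the dataset tooling:

```python
import re

# Matches each Hive-style `key=value` path segment.
HIVE_KV = re.compile(r"([^/=]+)=([^/]+)")

def parse_hive(path: str) -> dict:
    """Extract key=value partition segments from a repo path."""
    return dict(HIVE_KV.findall(path))

parts = parse_hive("data/match_id=2393343/map_name=de_ancient/player=0/chunks/chunk_4/video.mp4")
print(parts)  # {'match_id': '2393343', 'map_name': 'de_ancient', 'player': '0'}
```

This is handy for grouping locally downloaded media back into (match, map, player) buckets without reading any parquet.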
### Row semantics

- `player` is the canonical 0-9 player index, stable for a match. `spec_slot` is the transient CS2 spectator slot used to capture the POV — useful only for debugging.
- Recording starts at the playable round start (`freeze_end_tick`) and stops at the player's death tick, or at round end for survivors. POVs in the same round can have different durations.
- `inputs` and `worlds` are struct-of-arrays in the chunks parquet — `row["worlds"]["X"]` returns the per-tick X position list for all 10 players.
- Hot filter columns on `chunks`: `match_id`, `map_name`, `player`, `round`, `chunk_index`, `primary_weapon`, `player_side`, `survived_chunk`, `damage_taken`, `shots_fired`, `distance_traveled`, `weapons_used`.
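To pull one player's trajectory out of a struct-of-arrays row, index the inner per-tick lists. A sketch on synthetic data — the exact nesting (ticks outer, players inner) is an assumption read off the card's description of `row["worlds"]["X"]`, and `player_track` is a hypothetical helper:

```python
# Synthetic stand-in for row["worlds"]: 3 ticks, 2 of the 10 players shown.
worlds = {
    "X": [[0.0, 10.0], [1.0, 10.5], [2.0, 11.0]],
    "Y": [[0.0,  5.0], [0.5,  5.0], [1.0,  5.0]],
}

def player_track(worlds: dict, player: int, field: str = "X") -> list:
    """Per-tick values of one world-state field for one player index."""
    return [tick_values[player] for tick_values in worlds[field]]

print(player_track(worlds, 0))  # [0.0, 1.0, 2.0]
print(player_track(worlds, 1))  # [10.0, 10.5, 11.0]
```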
## Creation

Built with the recorder in https://github.com/julien-blanchon/opencs2-dataset:

- Demos — pulled from `blanchon/cs2_dataset_demo`.
- Render — a headless CS2 + custom plugin replays each demo, captures every player POV tick-by-tick, and streams raw frames to NVENC. Validated end-to-end: `start_tick == requested`, frame count matches the tick range.
- Parquet — chunks (≤1 min) sorted by `(round, chunk_index)`, written with `row_group_size=1`, `write_page_index=True` and `use_content_defined_chunking=True` so re-uploads of touched chunks transfer near-zero bytes via Xet.
- Upload — each render worker writes its own `<machine>-<uuid>` shard via `upload_large_folder` with `hf_transfer` + `hf_xet`.
## Licensing & Usage

`.dem` source data is mirrored from HLTV; downstream use is bound by the original tournament terms. Renders + metadata: CC-BY-4.0.
## Citation

```bibtex
@misc{blanchon2026opencs2,
  author       = {Julien Blanchon},
  title        = {OpenCS2 Dataset},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://github.com/julien-blanchon/opencs2-dataset}}
}
```