
GeoBench (GeoVista Bench)

GeoBench is a collection of real-world panoramas with rich metadata for evaluating geolocation models. Each sample corresponds to one panorama identified by its uid and includes both the original high-resolution imagery and a lightweight preview for rapid inspection.

Dataset Structure

  • id: unique identifier (same as uid from the original data).
  • raw_image_path: relative path (within this repo) to the source panorama under raw_image/<uid>/.
  • preview: compressed JPEG preview (<=1M pixels) under preview_image/<uid>/; this is the field the Hugging Face dataset viewer renders.
  • metadata: JSON object storing capture timestamp, location, pano_id, city, and other attributes. Downstream users can parse it to obtain lat/lng, city names, multi-level location tags, etc.
  • data_type: string describing the imagery type. If absent in metadata it defaults to panorama.

All samples are stored in a Hugging Face-compatible parquet file at data/<split>/data-00000-of-00001.parquet, with additional metadata in dataset_info.json.
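Downstream code usually only needs id, metadata, and one of the two image fields. As a minimal, self-contained sketch of parsing the metadata JSON into lat/lng and city (the sample values below are invented for illustration, not taken from the dataset):

```python
import json

# Synthetic sample mimicking the schema above; all values are illustrative only.
sample = {
    "id": "abc123",
    "raw_image_path": "raw_image/abc123/pano.jpg",
    "metadata": json.dumps({"lat": 48.8584, "lng": 2.2945, "city": "Paris"}),
    "data_type": "panorama",
}

# Parse the JSON metadata blob into typed fields.
meta = json.loads(sample["metadata"])
lat, lng = float(meta["lat"]), float(meta["lng"])
city = meta.get("city", "unknown")

# data_type defaults to "panorama" when absent, per the schema description.
data_type = sample.get("data_type") or "panorama"

print(city, lat, lng, data_type)  # → Paris 48.8584 2.2945 panorama
```

The exact keys inside metadata (lat, lng, city) follow the field list above; any other attributes can be read from the same parsed dict.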

Working with GeoBench

  1. Clone/download this folder (or pull it via huggingface_hub).
  2. Load the parquet file using Python:
    from datasets import load_dataset
    
    ds = load_dataset('path/to/this/folder', split='train')
    sample = ds[0]
    `sample["preview"]` loads directly as a PIL image; `sample["raw_image_path"]` points to the higher-quality file if needed.
    
  3. Use the metadata to drive evaluation logic, e.g., compute city-level accuracy, filter by data_type, or inspect specific regions.
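As an illustration of step 3, the sketch below computes city-level accuracy together with great-circle (haversine) distance between predicted and ground-truth coordinates. The predictions and labels are toy values, not dataset content, and the metric choices are assumptions rather than an official evaluation protocol.

```python
import math

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance in kilometers between two lat/lng points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Toy (city, lat, lng) ground truth and model predictions; invented values.
gt = [("Paris", 48.8584, 2.2945), ("Tokyo", 35.6595, 139.7005)]
pred = [("Paris", 48.8600, 2.3500), ("Osaka", 34.6937, 135.5023)]

# City-level accuracy: fraction of samples where the predicted city matches.
city_acc = sum(g[0] == p[0] for g, p in zip(gt, pred)) / len(gt)

# Distance error in km for each sample.
dists = [haversine_km(g[1], g[2], p[1], p[2]) for g, p in zip(gt, pred)]

print(f"city accuracy = {city_acc:.2f}")  # → city accuracy = 0.50
print(f"mean distance error = {sum(dists) / len(dists):.1f} km")
```

The same loop structure works over `ds` loaded above: read lat/lng and city from each sample's parsed metadata and compare against your model's output.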

Notes

  • Raw panoramas retain original filenames to preserve provenance.
  • Preview images are resized to reduce storage costs while remaining representative of the scene.
  • Ensure you comply with the dataset’s license (dataset_info.json) when sharing or modifying derived works.

Related Resources

  • GeoVista model (RL-trained agentic VLM used in the paper): https://huggingface.co/LibraTree/GeoVista
  • GeoVista-Bench (previewable variant): a companion dataset with resized JPEG previews intended to make image preview easier in the Hugging Face dataset viewer: https://huggingface.co/datasets/LibraTree/GeoVistaBench (same underlying benchmark; different packaging / image formats).
  • Paper page on Hugging Face: https://huggingface.co/papers/2511.15705

Citation

@misc{wang2025geovistawebaugmentedagenticvisual,
      title        = {GeoVista: Web-Augmented Agentic Visual Reasoning for Geolocalization},
      author       = {Yikun Wang and Zuyan Liu and Ziyi Wang and Pengfei Liu and Han Hu and Yongming Rao},
      year         = {2025},
      eprint       = {2511.15705},
      archivePrefix= {arXiv},
      primaryClass = {cs.CV},
      url          = {https://arxiv.org/abs/2511.15705},
}