Enhance dataset card for NautData: Add metadata, links, description, tasks, and sample usage
This pull request significantly enhances the dataset card for `NautData` by:
- Adding relevant `task_categories`: `image-classification`, `object-detection`, `image-to-text`, `question-answering`, `image-text-to-text`.
- Including `language: en` and descriptive `tags`: `underwater`, `multimodal`, `LMM`, `instruction-following`, `scene-understanding`.
- Adding `size_categories: 1M<n<10M` to reflect the dataset's scale.
- Providing a comprehensive description of the dataset, its purpose, and the tasks it supports, clarifying that this repository (`Wang017/NautData`) holds the images and linking to `Wang017/NautData-Instruct` for annotations.
- Including direct links to the associated paper ([NAUTILUS: A Large Multimodal Model for Underwater Scene Understanding](https://huggingface.co/papers/2510.27481)), the project page (https://h-embodvis.github.io/NAUTILUS/), and the GitHub repository (https://github.com/H-EmbodVis/NAUTILUS).
- Adding a "Sample Usage" section with practical Python code snippets for local inference, directly extracted from the project's GitHub README.
- Including the official BibTeX citation for proper attribution.
This makes the dataset much more informative and easier for researchers to discover and use.

```diff
@@ -1,3 +1,74 @@
----
-license: apache-2.0
----
```

```yaml
---
license: apache-2.0
task_categories:
- image-classification
- object-detection
- image-to-text
- question-answering
- image-text-to-text
language: en
tags:
- underwater
- multimodal
- LMM
- instruction-following
- scene-understanding
size_categories:
- 1M<n<10M
---
```

# NautData: A Large Multimodal Dataset for Underwater Scene Understanding

[Paper](https://huggingface.co/papers/2510.27481) | [Project Page](https://h-embodvis.github.io/NAUTILUS/) | [Code](https://github.com/H-EmbodVis/NAUTILUS)

**NautData** is a large-scale underwater instruction-following dataset containing 1.45 million image-text pairs. It was built to address the lack of large-scale underwater multi-task instruction-tuning data, which has held back progress in underwater scene understanding, and it enables the development and thorough evaluation of underwater Large Multimodal Models (LMMs).

The dataset was introduced in the paper [NAUTILUS: A Large Multimodal Model for Underwater Scene Understanding](https://huggingface.co/papers/2510.27481). The paper also proposes the NAUTILUS model, which incorporates a Vision Feature Enhancement (VFE) module that explicitly restores clear underwater information and improves robustness to image degradation.

This Hugging Face repository (`Wang017/NautData`) contains the processed images of the NautData dataset; the corresponding instruction-tuning annotation files are hosted separately in [Wang017/NautData-Instruct](https://huggingface.co/datasets/Wang017/NautData-Instruct).
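
Since the images and the annotation files are split across two repositories, a typical first step is to fetch both. The snippet below is a minimal sketch using `huggingface_hub.snapshot_download`; the local directory names are placeholders, and the folder layout inside each repository should be inspected after download.

```python
from huggingface_hub import snapshot_download

# Fetch the processed images from this repository (large download).
images_dir = snapshot_download(
    repo_id="Wang017/NautData",
    repo_type="dataset",
    local_dir="NautData-images",  # placeholder path
)

# Fetch the instruction-tuning annotation files from the companion repository.
annotations_dir = snapshot_download(
    repo_id="Wang017/NautData-Instruct",
    repo_type="dataset",
    local_dir="NautData-Instruct",  # placeholder path
)

print("Images:", images_dir)
print("Annotations:", annotations_dir)
```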

## Supported Tasks

NautData supports eight underwater scene understanding tasks across image, region, and object levels, facilitating comprehensive analysis; an illustrative annotation record is sketched after the list:

* **Classification:** Coarse-grained and fine-grained image classification.
* **Captioning:** Image-level and region-level description generation.
* **Grounding:** Referring expression comprehension and localization.
* **Detection:** Object detection within underwater scenes.
* **Visual Question Answering (VQA):** Answering questions about images.
* **Counting:** Counting specific objects or entities.
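
The exact annotation format is defined by the files in `Wang017/NautData-Instruct`. Purely as a hypothetical illustration (the field names and values below are made up, not taken from the dataset), a LLaVA-style instruction-tuning record pairing one of these images with a grounding-style query might look roughly like this:

```python
# Hypothetical instruction-following record, for illustration only.
# The authoritative schema is defined by the annotation files in
# Wang017/NautData-Instruct.
sample = {
    "id": "example_000001",                # made-up identifier
    "image": "images/example_000001.jpg",  # made-up relative path into the image repo
    "conversations": [
        {
            "from": "human",
            "value": "<image>\nLocate the sea turtle in this image and "
                     "answer with a bounding box.",
        },
        {
            "from": "gpt",
            "value": "[0.32, 0.41, 0.68, 0.77]",  # made-up normalized box
        },
    ],
}
```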

## Sample Usage

The following snippets, adapted from the project's GitHub repository, demonstrate single-sample inference with models trained on NautData (the NAUTILUS variants). They illustrate how the dataset can be used for the underwater scene understanding tasks listed above.

### NAUTILUS (LLaVA) Inference

```bash
cd LLaVA
CUDA_VISIBLE_DEVICES=0 python scripts/inference/inference.py \
    --model-path "path to checkpoint" \
    --model-base "models--liuhaotian--llava-v1.5-7b" \
    --dinov2-weight "path to dinov2" \
    --image "path to image" \
    --prompt "question"
# --prompt defaults to "Describe the image"
```

### NAUTILUS (Qwen) Inference

```bash
cd qwen-vl-finetune
CUDA_VISIBLE_DEVICES=0 python scripts/inference.py \
    --checkpoint "path to checkpoint" \
    --image "path to image" \
    --prompt "question"
# --prompt defaults to "Describe the image"
```

For more detailed usage, including dataset preparation, training, and evaluation, please refer to the [official GitHub repository](https://github.com/H-EmbodVis/NAUTILUS).

## Citation

If you find NautData or the NAUTILUS project useful in your research, please consider citing the associated paper:

```bibtex
@inproceedings{xu2025nautilus,
  title={NAUTILUS: A Large Multimodal Model for Underwater Scene Understanding},
  author={Xu, Wei and Wang, Cheng and Liang, Dingkang and Zhao, Zongchuang and Jiang, Xingyu and Zhang, Peng and Bai, Xiang},
  booktitle={Advances in Neural Information Processing Systems},
  year={2025}
}
```