---
license: apache-2.0
task_categories:
- image-classification
- object-detection
- image-to-text
- question-answering
- image-text-to-text
language: en
tags:
- underwater
- multimodal
- LMM
- instruction-following
- scene-understanding
size_categories:
- 1M<n<10M
---
# NautData
Paper | Project Page | Code
NautData is a large-scale underwater instruction-following dataset containing 1.45 million image-text pairs. It was constructed to fill the gap in large-scale multi-task instruction-tuning data for the underwater domain, which is crucial for advancing underwater scene understanding methods. The dataset enables the development and thorough evaluation of underwater Large Multimodal Models (LMMs).
This dataset was introduced in the paper NAUTILUS: A Large Multimodal Model for Underwater Scene Understanding. The paper also proposes the NAUTILUS model, which incorporates a Vision Feature Enhancement (VFE) module to explicitly restore clear underwater information and improve robustness against image degradation.
This Hugging Face repository (Wang017/NautData) specifically contains the processed images that form part of the NautData dataset. For the corresponding instruction-tuning annotation files, please refer to the Wang017/NautData-Instruct dataset on the Hugging Face Hub.
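As a minimal sketch of how the two repositories can be fetched together, the snippet below uses `snapshot_download` from `huggingface_hub`. The local directory names are arbitrary choices, and the internal file layout of each repository is an assumption to verify against the Hub file listings:

```python
# Minimal sketch: download the image repository and the matching
# annotation repository from the Hugging Face Hub.
# The local directory names are arbitrary; the internal file layout
# of each repository is an assumption -- check the repo file listings.
from huggingface_hub import snapshot_download

images_dir = snapshot_download(
    repo_id="Wang017/NautData",
    repo_type="dataset",
    local_dir="NautData-images",
)
annotations_dir = snapshot_download(
    repo_id="Wang017/NautData-Instruct",
    repo_type="dataset",
    local_dir="NautData-instruct",
)
print(images_dir, annotations_dir)
```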
## Supported Tasks
NautData supports eight underwater scene understanding tasks across image, region, and object levels, facilitating comprehensive analysis:
- Classification: Coarse-grained and fine-grained image classification.
- Captioning: Image-level and region-level description generation.
- Grounding: Referring expression comprehension and localization.
- Detection: Object detection within underwater scenes.
- Visual Question Answering (VQA): Answering questions about images.
- Counting: Counting specific objects or entities.
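To illustrate how such multi-task supervision is commonly consumed, the sketch below walks a LLaVA-style conversation record, a reasonable guess given that NAUTILUS builds on LLaVA. The record contents and field names here are hypothetical; consult Wang017/NautData-Instruct for the actual schema:

```python
import json

# Hypothetical record in the common LLaVA-style instruction format;
# the actual NautData-Instruct schema may differ.
record = {
    "image": "images/underwater_000001.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nHow many fish are in the image?"},
        {"from": "gpt", "value": "There are three fish in the image."},
    ],
}

def to_prompt_target(rec):
    """Split a conversation record into (prompt, target) pairs."""
    turns = rec["conversations"]
    return [
        (turns[i]["value"], turns[i + 1]["value"])
        for i in range(0, len(turns) - 1, 2)
    ]

for prompt, target in to_prompt_target(record):
    print("PROMPT:", prompt)
    print("TARGET:", target)
```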
## Sample Usage
The following snippets, adapted from the project's GitHub repository, demonstrate how to perform single-sample inference using models trained on NautData (NAUTILUS variants). These examples illustrate how the dataset can be utilized for various underwater scene understanding tasks.
### NAUTILUS(LLaVA) Inference
```bash
cd LLaVA
# The default prompt is "Describe the image".
CUDA_VISIBLE_DEVICES=0 python scripts/inference/inference.py \
    --model-path "path to checkpoint" \
    --model-base "models--liuhaotian--llava-v1.5-7b" \
    --dinov2-weight "path to dinov2" \
    --image "path to image" \
    --prompt "question"
```
### NAUTILUS(Qwen) Inference
```bash
cd qwen-vl-finetune
# The default prompt is "Describe the image".
CUDA_VISIBLE_DEVICES=0 python scripts/inference.py \
    --checkpoint "path to checkpoint" \
    --image "path to image" \
    --prompt "question"
```
For more detailed usage, including dataset preparation, training, and evaluation, please refer to the official GitHub repository.
## Citation
If you find NautData or the NAUTILUS project useful in your research, please consider citing the associated paper:
```bibtex
@inproceedings{xu2025nautilus,
  title={NAUTILUS: A Large Multimodal Model for Underwater Scene Understanding},
  author={Xu, Wei and Wang, Cheng and Liang, Dingkang and Zhao, Zongchuang and Jiang, Xingyu and Zhang, Peng and Bai, Xiang},
  booktitle={Advances in Neural Information Processing Systems},
  year={2025}
}
```