add annotations

- .gitattributes +5 -0
- .gitignore +5 -0
- EnvDrop/annotations.json +3 -0
- R2R/annotations.json +3 -0
- README.md +75 -0
- RxR/annotations.json +3 -0
.gitattributes CHANGED

@@ -57,3 +57,8 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+R2R/* filter=lfs diff=lfs merge=lfs -text
+RxR/* filter=lfs diff=lfs merge=lfs -text
+EnvDrop/* filter=lfs diff=lfs merge=lfs -text
+*.json filter=lfs diff=lfs merge=lfs -text
+*.tar.gz filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED

@@ -0,0 +1,5 @@
+EnvDrop/images
+R2R/images
+RxR/images
+
+*.tar.gz
EnvDrop/annotations.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76fde83c4732d0aa0ff1169ff1a4330e060872a0500c718a7f1367557ba85ce6
+size 190491503
R2R/annotations.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a653b974c1b356aea6e810a5aaacbcb73446062234ebbb164fbe8f0478225fa4
+size 5512141
README.md CHANGED

@@ -1,3 +1,78 @@
---
license: cc-by-sa-4.0
language:
- en
pretty_name: StreamVLN
---

This repo contains the data for the paper **"StreamVLN: Streaming Vision-and-Language Navigation via SlowFast Context Modeling."**
[](http://arxiv.org/abs/2507.05240)
[](https://streamvln.github.io/)
[](https://www.youtube.com/watch?v=gG3mpefOBjc)

## Overview

The dataset consists of visual observations and annotations collected in Matterport3D (MP3D) environments using the Habitat simulator. It combines data from several open-source Vision-and-Language Navigation (VLN) datasets.
## Data Collection

The data in this repo comes from the following open-source datasets:
- [R2R-VLNCE](https://github.com/jacobkrantz/VLN-CE)
- [RxR-VLNCE](https://github.com/jacobkrantz/VLN-CE)
- [R2R-EnvDrop](https://github.com/airsplay/R2R-EnvDrop)

To obtain actions and observations, we run a `ShortestPathFollower` agent in the Habitat simulator that follows each episode's subgoals and records RGB observations along the path. The data is collected across the Matterport3D (MP3D) scenes.
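Given the discrete action space documented in `annotations.json` (25 cm forward steps, 15° turns), an agent's planar pose can be recovered from an action sequence by dead reckoning. The sketch below is illustrative only: the function name and the example sequence are ours, and it ignores collisions and the scene navmesh that the real simulator accounts for.

```python
import math

# Action IDs follow the annotation convention:
# 1 = MoveForward (0.25 m), 2 = TurnLeft (15 deg), 3 = TurnRight (15 deg).
FORWARD_M = 0.25
TURN_DEG = 15.0

def dead_reckon(actions, x=0.0, y=0.0, heading_deg=0.0):
    """Integrate discrete actions into a planar (x, y, heading) pose.

    Illustrative only: the simulator's true pose also depends on
    collisions and the navmesh, which this ignores.
    """
    for a in actions:
        if a == 1:  # MoveForward
            x += FORWARD_M * math.cos(math.radians(heading_deg))
            y += FORWARD_M * math.sin(math.radians(heading_deg))
        elif a == 2:  # TurnLeft
            heading_deg = (heading_deg + TURN_DEG) % 360.0
        elif a == 3:  # TurnRight
            heading_deg = (heading_deg - TURN_DEG) % 360.0
        # -1 (Dummy) and 0 (Stop) leave the pose unchanged.
    return x, y, heading_deg

# Six 15-degree left turns, then four forward steps of 0.25 m:
# the agent ends up facing 90 degrees, roughly 1 m along +y.
print(dead_reckon([2] * 6 + [1] * 4))  # -> (~0.0, 1.0, 90.0)
```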
## Dataset Description

### Dataset Structure

After extracting `images.tar.gz`, the dataset has the following structure:

```shell
StreamVLN-Dataset/
├── EnvDrop/
│   └── annotations.json
├── R2R/
│   ├── images/
│   │   ├── 1LXtFkjw3qL_r2r_000087/
│   │   │   └── rgb/
│   │   │       ├── 000.jpg
│   │   │       ├── 001.jpg
│   │   │       └── ...
│   │   ├── 1LXtFkjw3qL_r2r_000099/
│   │   ├── 1LXtFkjw3qL_r2r_000129/
│   │   └── ...
│   └── annotations.json
└── RxR/
    ├── images/
    └── annotations.json
```
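Once extracted, episodes in a split can be enumerated with the standard library alone. A minimal sketch, where the helper name and the throwaway demo directory are illustrative and only the layout above is assumed:

```python
import tempfile
from pathlib import Path

def list_episodes(split_dir):
    """Yield (episode_name, sorted RGB frame paths) for one split, e.g. R2R/."""
    images_dir = Path(split_dir) / "images"
    for ep_dir in sorted(p for p in images_dir.iterdir() if p.is_dir()):
        yield ep_dir.name, sorted((ep_dir / "rgb").glob("*.jpg"))

# Demo on a throwaway directory laid out like the tree above.
root = Path(tempfile.mkdtemp())
rgb = root / "images" / "1LXtFkjw3qL_r2r_000087" / "rgb"
rgb.mkdir(parents=True)
for name in ("001.jpg", "000.jpg"):
    (rgb / name).touch()

for episode, frames in list_episodes(root):
    print(episode, [f.name for f in frames])
# -> 1LXtFkjw3qL_r2r_000087 ['000.jpg', '001.jpg']
```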
### Contents

`images/`: contains the RGB observations collected from the Habitat simulator.

`annotations.json`: contains the navigation instructions and the discrete action sequences from the Habitat simulator for each dataset. The annotation structure for each episode is as follows:

```python
{
    "id": (int) Identifier for the episode,
    "video": (str) Video ID identifying the relative path to the directory that contains the episode,
    "instruction": (list[str]) Navigation instructions,
    "actions": (list[int]) Discrete action sequence in the Habitat simulator,
                # 1 = MoveForward (25 cm)
                # 2 = TurnLeft (15°)
                # 3 = TurnRight (15°)
                # -1 = Dummy
                # 0 = Stop (omitted in annotations)
}
```

Each episode in `annotations.json` corresponds to a folder in the `images/` directory, whose name appears in the `video` ID. The RGB images are stored in the `rgb/` subdirectory of each episode folder. The length of the `actions` list matches the number of RGB images in the episode, yielding aligned observation-action pairs.
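Because the i-th action is paired with the i-th RGB frame, an episode record can be unrolled into observation-action pairs. A minimal sketch, where the helper and the sample episode are made up for illustration and the zero-padded frame names follow the directory tree above:

```python
ACTION_NAMES = {1: "MoveForward", 2: "TurnLeft", 3: "TurnRight", -1: "Dummy"}

def frame_action_pairs(episode):
    """Pair each RGB frame path with the action taken at that step.

    Relies on the documented invariant that len(actions) equals the
    number of RGB images in the episode.
    """
    return [
        (f"{episode['video']}/rgb/{i:03d}.jpg", ACTION_NAMES[a])
        for i, a in enumerate(episode["actions"])
    ]

# A made-up episode in the documented schema (not a real record).
episode = {
    "id": 0,
    "video": "1LXtFkjw3qL_r2r_000087",
    "instruction": ["Walk forward and turn left at the sofa."],
    "actions": [1, 1, 2],
}
for frame, action in frame_action_pairs(episode):
    print(frame, action)  # prints one "frame action" line per step
```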

## Note

- For **EnvDrop**, **only annotations** are provided due to the large number of episodes. RGB images can be rendered using the Habitat simulator following the collection method described above.
RxR/annotations.json ADDED

@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc4652aa0d70edbb4933b47ad10ec03c3553c38c11531b130d4d7178b377eaef
+size 21029266