AndesUI_Benchmark
The AndesUI dataset consists of two parts: a training set (train) and a test set (test), where the apps in the test set do not appear in the training set.
The dataset covers three main task types:
1. Grounding Task: predict the bounding box (bbox) coordinates given a widget description.
2. Referring Task: predict the corresponding widget description given a bounding box (bbox).
3. QA Task: answer a natural-language question by predicting the bbox coordinates that need to be clicked.
According to the technical report, the original test set contains:
- 8,642 Referring samples
- 7,194 Grounding samples
- 1,181 QA samples
To simplify testing, we randomly selected the following subsets:
- Referring: 1,500 samples
- Grounding: 1,500 samples
- QA: 748 samples
These subsets have been open-sourced.
Each subset is provided in both JSON and TSV formats, with image paths stored as relative paths.
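For orientation, here is a minimal sketch of loading one of the JSON subsets and resolving the relative image paths; the root directory and the file name grounding.json are assumptions, while the imgpath and bbox field names follow the task descriptions below.

```python
import json
import os

DATA_ROOT = "AndesUI_Test"                              # assumed download location
json_path = os.path.join(DATA_ROOT, "grounding.json")   # hypothetical file name

with open(json_path, "r", encoding="utf-8") as f:
    samples = json.load(f)

for sample in samples[:3]:
    # Image paths are stored relative to the dataset root.
    img_path = os.path.join(DATA_ROOT, sample["imgpath"])
    print(img_path, sample["bbox"])
```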
Task Details
Referring Task
The JSON file includes three fields:
- description (the expected output)
- imgpath (image path)
- bbox (bounding box)
This task requires the model to predict the description of a widget based on its given bbox. Different models support different bbox formats:
- Qwen model: prefers [xmin, ymin, xmax, ymax] (raw pixel coordinates)
- Intern model: prefers [xmin, ymin, xmax, ymax] (normalized coordinates)
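For models that expect normalized coordinates, a pixel-space bbox has to be rescaled first. A minimal sketch, assuming normalization to a 0-1000 range (the actual range depends on the model; some use [0, 1]):

```python
def normalize_bbox(bbox, img_w, img_h, scale=1000):
    """Convert a pixel-space [xmin, ymin, xmax, ymax] bbox to normalized
    coordinates in [0, scale]. The scale of 1000 is an assumption; check
    the target model's convention."""
    xmin, ymin, xmax, ymax = bbox
    return [
        round(xmin / img_w * scale),
        round(ymin / img_h * scale),
        round(xmax / img_w * scale),
        round(ymax / img_h * scale),
    ]

# Example: a 1080x2400 screenshot with a widget at [540, 1200, 700, 1300]
print(normalize_bbox([540, 1200, 700, 1300], 1080, 2400))  # -> [500, 500, 648, 542]
```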
Accuracy metric: a prediction (pred) is considered correct if its longest common substring with the ground-truth description is non-empty.
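A minimal sketch of this check, using a standard dynamic-programming longest-common-substring routine (not the exact script shipped with the benchmark):

```python
def longest_common_substring_len(a: str, b: str) -> int:
    """Length of the longest common (contiguous) substring of a and b."""
    best = 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                # Extend the common suffix ending at a[i-1] / b[j-1].
                curr[j] = prev[j - 1] + 1
                best = max(best, curr[j])
        prev = curr
    return best

def referring_correct(pred: str, gt: str) -> bool:
    # Correct if the longest common substring with the ground truth is non-empty.
    return longest_common_substring_len(pred, gt) > 0
```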
Grounding Task
The JSON file includes three fields:
- question (the widget description)
- imgpath (image path)
- bbox (ground-truth bounding box)
This task is the inverse of Referring: the model must predict the bbox of a widget given its description. Different models output different formats:
- Some models directly predict [x_center, y_center] (center coordinates)
- Others predict [xmin, ymin, xmax, ymax] (a full bbox)

If the model outputs a full bbox, we compute its geometric center as [x_center, y_center].
Accuracy metric: The predicted center must lie inside the ground-truth bbox.
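A minimal sketch of this evaluation, assuming the prediction and ground-truth bbox share the same coordinate space (not the exact script used in VLMEvalKit):

```python
def bbox_center(bbox):
    """Geometric center of an [xmin, ymin, xmax, ymax] bbox."""
    xmin, ymin, xmax, ymax = bbox
    return ((xmin + xmax) / 2, (ymin + ymax) / 2)

def grounding_correct(pred, gt_bbox) -> bool:
    """pred is either [x_center, y_center] or [xmin, ymin, xmax, ymax];
    it is correct if its (computed) center lies inside the ground-truth bbox."""
    center = tuple(pred) if len(pred) == 2 else bbox_center(pred)
    xmin, ymin, xmax, ymax = gt_bbox
    return xmin <= center[0] <= xmax and ymin <= center[1] <= ymax
```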
QA Task
The evaluation logic for the QA task is the same as for the Grounding task.
The evaluation functionality for the AndesUI Test dataset has been integrated into the VLMEvalKit toolkit. We provide a separate evaluation script for each of the three task types (Grounding, Referring, and QA). These scripts should be placed in the vlmeval/dataset/GUI directory, from which users can invoke them directly for automated assessment.