Dataset Viewer

data_source (stringclasses, 1 value) | prompt (list, length 2) | images (list, length 1) | ability (stringclasses, 253 values) | env_name (stringclasses, 1 value) | reward_model (dict) | extra_info (dict)
---|---|---|---|---|---|---|
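The rows below are individual training examples. A minimal sketch of loading and inspecting them with the `datasets` library, assuming this split is published on the Hugging Face Hub; the repo id used here is a placeholder, not confirmed by the viewer:

```python
# Minimal sketch; "org/objects365_counting" is a hypothetical repo id.
from datasets import load_dataset

ds = load_dataset("org/objects365_counting", split="train")  # placeholder repo id
row = ds[0]

print(row["data_source"])                    # "objects365_counting"
print(row["ability"])                        # e.g. "object_counting_dishwasher"
print(row["env_name"])                       # "visual_toolbox_v2"
print(row["reward_model"]["ground_truth"])   # e.g. "0"
print(row["extra_info"]["object_type"], row["extra_info"]["count"])

# "prompt" holds a two-message chat (system + user) in role/content format.
for msg in row["prompt"]:
    print(msg["role"], ":", msg["content"][:60], "...")
```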
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of dishwasher in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_dishwasher
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00494004_dishwasher_4",
"object_type": "dishwasher",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 683,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch10/objects365_v1_00494004.jpg",
"question": "Can you count the number of dishwasher in this image?",
"index": "object_counting_dishwasher_0"
}
|
|
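The system message in each row specifies the tool-calling protocol: the model emits a JSON object inside `<tool_call>` tags, giving a `bbox_2d` crop region and an optional `label`. A minimal sketch of extracting such a call from a model response; the response string below is illustrative, not taken from the dataset:

```python
# Sketch of parsing the <tool_call> block the system prompt asks for;
# the example response is illustrative, not dataset content.
import json
import re

response = (
    "<think>The region around the counter needs a closer look.</think> "
    '<tool_call>{"name": "image_zoom_in_tool", "arguments": '
    '{"bbox_2d": [10, 20, 100, 200], "label": "dishwasher"}}</tool_call>'
)

match = re.search(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", response, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    name = call["name"]                              # "image_zoom_in_tool"
    x1, y1, x2, y2 = call["arguments"]["bbox_2d"]    # crop region [x1, y1, x2, y2]
    print(name, (x1, y1, x2, y2), call["arguments"].get("label"))
```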
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many lantern in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_lantern
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00096935_lantern_5",
"object_type": "lantern",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 512,
"height": 766
},
"original_image_path": "counting/objects365_counting/images/v1/patch2/objects365_v1_00096935.jpg",
"question": "How many lantern in this image?",
"index": "object_counting_lantern_0"
}
|
|
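The `reward_model` cell carries only a `ground_truth` count string. A hedged sketch of an exact-match score against the `<answer>` tag; the actual reward function used by the `visual_toolbox_v2` environment is not shown in the viewer:

```python
# Assumed exact-match scoring; not confirmed as the environment's real reward.
import re

def count_reward(response: str, ground_truth: str) -> float:
    m = re.search(r"<answer>\s*(.*?)\s*</answer>", response, re.DOTALL)
    if m is None:
        return 0.0
    return 1.0 if m.group(1).strip() == ground_truth.strip() else 0.0

print(count_reward("<think>No lanterns are visible.</think> <answer>0</answer>", "0"))  # 1.0
```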
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of tennis in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_tennis
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01305011_tennis_3",
"object_type": "tennis",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 680
},
"original_image_path": "counting/objects365_counting/images/v2/patch26/objects365_v2_01305011.jpg",
"question": "Please tell me the count of tennis in this image?",
"index": "object_counting_tennis_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of elephant in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_elephant
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01194181_elephant_0",
"object_type": "elephant",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 731
},
"original_image_path": "counting/objects365_counting/images/v2/patch23/objects365_v2_01194181.jpg",
"question": "Can you count the number of elephant in this image?",
"index": "object_counting_elephant_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of pillow in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_pillow
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01114460_pillow_5",
"object_type": "pillow",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 375,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch21/objects365_v2_01114460.jpg",
"question": "What's the number of pillow in this image?",
"index": "object_counting_pillow_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of fire truck in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_fire
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00619997_fire_0",
"object_type": "fire",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 640,
"height": 486
},
"original_image_path": "counting/objects365_counting/images/v1/patch13/objects365_v1_00619997.jpg",
"question": "Can you count the number of fire truck in this image?",
"index": "object_counting_fire_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many tomato can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_tomato
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00451050_tomato_2",
"object_type": "tomato",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 769,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch9/objects365_v1_00451050.jpg",
"question": "How many tomato can you see in this image?",
"index": "object_counting_tomato_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many crosswalk sign in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_crosswalk
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00387195_crosswalk_2",
"object_type": "crosswalk",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 683,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch8/objects365_v1_00387195.jpg",
"question": "How many crosswalk sign in this image?",
"index": "object_counting_crosswalk_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of mask in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_mask
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01847290_mask_4",
"object_type": "mask",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 332
},
"original_image_path": "counting/objects365_counting/images/v2/patch39/objects365_v2_01847290.jpg",
"question": "Please tell me the count of mask in this image?",
"index": "object_counting_mask_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of recorder in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_recorder
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00124172_recorder_2",
"object_type": "recorder",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 641,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch2/objects365_v1_00124172.jpg",
"question": "Please tell me the count of recorder in this image?",
"index": "object_counting_recorder_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of extension cord in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_extension
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00116150_extension_5",
"object_type": "extension",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 768,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch2/objects365_v1_00116150.jpg",
"question": "Can you count the number of extension cord in this image?",
"index": "object_counting_extension_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of napkin in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_napkin
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00414364_napkin_8",
"object_type": "napkin",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 773,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch8/objects365_v1_00414364.jpg",
"question": "What's the number of napkin in this image?",
"index": "object_counting_napkin_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of scale in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_scale
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01097845_scale_0",
"object_type": "scale",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 333
},
"original_image_path": "counting/objects365_counting/images/v2/patch20/objects365_v2_01097845.jpg",
"question": "Can you count the number of scale in this image?",
"index": "object_counting_scale_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many megaphone in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_megaphone
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01149191_megaphone_3",
"object_type": "megaphone",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 768,
"height": 1024
},
"original_image_path": "counting/objects365_counting/images/v2/patch22/objects365_v2_01149191.jpg",
"question": "How many megaphone in this image?",
"index": "object_counting_megaphone_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many lobster can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_lobster
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_02195937_lobster_2",
"object_type": "lobster",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 338
},
"original_image_path": "counting/objects365_counting/images/v2/patch48/objects365_v2_02195937.jpg",
"question": "How many lobster can you see in this image?",
"index": "object_counting_lobster_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many tennis in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_tennis
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00103063_tennis_4",
"object_type": "tennis",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 683,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch2/objects365_v1_00103063.jpg",
"question": "How many tennis in this image?",
"index": "object_counting_tennis_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of billards in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_billards
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01024284_billards_0",
"object_type": "billards",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch19/objects365_v2_01024284.jpg",
"question": "<image>\nWhat's the number of billards in this image?",
"index": "object_counting_billards_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many snowboard in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_snowboard
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01838628_snowboard_4",
"object_type": "snowboard",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 333
},
"original_image_path": "counting/objects365_counting/images/v2/patch39/objects365_v2_01838628.jpg",
"question": "How many snowboard in this image?",
"index": "object_counting_snowboard_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of book in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_book
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01104471_book_2",
"object_type": "book",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 375
},
"original_image_path": "counting/objects365_counting/images/v2/patch21/objects365_v2_01104471.jpg",
"question": "Can you count the number of book in this image?",
"index": "object_counting_book_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of crane in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_crane
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01949591_crane_4",
"object_type": "crane",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 333,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch42/objects365_v2_01949591.jpg",
"question": "What's the number of crane in this image?",
"index": "object_counting_crane_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of scooter in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_scooter
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01316954_scooter_5",
"object_type": "scooter",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 332,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch26/objects365_v2_01316954.jpg",
"question": "What's the number of scooter in this image?",
"index": "object_counting_scooter_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many dumbbell in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_dumbbell
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00023835_dumbbell_2",
"object_type": "dumbbell",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 767,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch0/objects365_v1_00023835.jpg",
"question": "How many dumbbell in this image?",
"index": "object_counting_dumbbell_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of dog in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_dog
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00716193_dog_0",
"object_type": "dog",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 768,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch15/objects365_v1_00716193.jpg",
"question": "<image>\nWhat's the number of dog in this image?",
"index": "object_counting_dog_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of candle in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_candle
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01849869_candle_2",
"object_type": "candle",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 383
},
"original_image_path": "counting/objects365_counting/images/v2/patch39/objects365_v2_01849869.jpg",
"question": "Can you count the number of candle in this image?",
"index": "object_counting_candle_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of key in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_key
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01547095_key_5",
"object_type": "key",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 281
},
"original_image_path": "counting/objects365_counting/images/v2/patch32/objects365_v2_01547095.jpg",
"question": "Please tell me the count of key in this image?",
"index": "object_counting_key_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of hoverboard in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_hoverboard
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01315776_hoverboard_0",
"object_type": "hoverboard",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 334,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch26/objects365_v2_01315776.jpg",
"question": "<image>\nWhat's the number of hoverboard in this image?",
"index": "object_counting_hoverboard_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many dumbbell can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_dumbbell
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00698440_dumbbell_0",
"object_type": "dumbbell",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 768,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch15/objects365_v1_00698440.jpg",
"question": "How many dumbbell can you see in this image?",
"index": "object_counting_dumbbell_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many drum can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_drum
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01085630_drum_3",
"object_type": "drum",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1023,
"height": 575
},
"original_image_path": "counting/objects365_counting/images/v2/patch20/objects365_v2_01085630.jpg",
"question": "How many drum can you see in this image?",
"index": "object_counting_drum_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many crosswalk sign in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_crosswalk
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01533563_crosswalk_11",
"object_type": "crosswalk",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 960,
"height": 638
},
"original_image_path": "counting/objects365_counting/images/v2/patch31/objects365_v2_01533563.jpg",
"question": "How many crosswalk sign in this image?",
"index": "object_counting_crosswalk_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many saxophone in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_saxophone
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00449913_saxophone_8",
"object_type": "saxophone",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 768,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch9/objects365_v1_00449913.jpg",
"question": "How many saxophone in this image?",
"index": "object_counting_saxophone_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many cue in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_cue
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00409988_cue_6",
"object_type": "cue",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 804,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch8/objects365_v1_00409988.jpg",
"question": "How many cue in this image?",
"index": "object_counting_cue_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of poker card in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_poker
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01693076_poker_4",
"object_type": "poker",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 334
},
"original_image_path": "counting/objects365_counting/images/v2/patch35/objects365_v2_01693076.jpg",
"question": "What's the number of poker card in this image?",
"index": "object_counting_poker_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many helicopter can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_helicopter
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00426963_helicopter_5",
"object_type": "helicopter",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 769,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch9/objects365_v1_00426963.jpg",
"question": "How many helicopter can you see in this image?",
"index": "object_counting_helicopter_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of scale in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_scale
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01615825_scale_4",
"object_type": "scale",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 768
},
"original_image_path": "counting/objects365_counting/images/v2/patch33/objects365_v2_01615825.jpg",
"question": "Can you count the number of scale in this image?",
"index": "object_counting_scale_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many mask can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_mask
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01560545_mask_2",
"object_type": "mask",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 685
},
"original_image_path": "counting/objects365_counting/images/v2/patch32/objects365_v2_01560545.jpg",
"question": "How many mask can you see in this image?",
"index": "object_counting_mask_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many rice in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_rice
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01265964_rice_11",
"object_type": "rice",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 375
},
"original_image_path": "counting/objects365_counting/images/v2/patch25/objects365_v2_01265964.jpg",
"question": "How many rice in this image?",
"index": "object_counting_rice_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of tablet in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_tablet
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00179013_tablet_3",
"object_type": "tablet",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 512,
"height": 686
},
"original_image_path": "counting/objects365_counting/images/v1/patch3/objects365_v1_00179013.jpg",
"question": "Please tell me the count of tablet in this image?",
"index": "object_counting_tablet_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of swing in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_swing
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01987502_swing_0",
"object_type": "swing",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 375
},
"original_image_path": "counting/objects365_counting/images/v2/patch43/objects365_v2_01987502.jpg",
"question": "<image>\nWhat's the number of swing in this image?",
"index": "object_counting_swing_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of crosswalk sign in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_crosswalk
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00552413_crosswalk_11",
"object_type": "crosswalk",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 768,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch11/objects365_v1_00552413.jpg",
"question": "Can you count the number of crosswalk sign in this image?",
"index": "object_counting_crosswalk_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many green onion in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_green
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01687961_green_7",
"object_type": "green",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 375
},
"original_image_path": "counting/objects365_counting/images/v2/patch35/objects365_v2_01687961.jpg",
"question": "How many green onion in this image?",
"index": "object_counting_green_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of motorcycle in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_motorcycle
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00354273_motorcycle_0",
"object_type": "motorcycle",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 512,
"height": 768
},
"original_image_path": "counting/objects365_counting/images/v1/patch7/objects365_v1_00354273.jpg",
"question": "<image>\nWhat's the number of motorcycle in this image?",
"index": "object_counting_motorcycle_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of deer in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_deer
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01372113_deer_10",
"object_type": "deer",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 374
},
"original_image_path": "counting/objects365_counting/images/v2/patch27/objects365_v2_01372113.jpg",
"question": "Please tell me the count of deer in this image?",
"index": "object_counting_deer_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of asparagus in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_asparagus
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01625417_asparagus_0",
"object_type": "asparagus",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 375,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch34/objects365_v2_01625417.jpg",
"question": "What's the number of asparagus in this image?",
"index": "object_counting_asparagus_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many coconut in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_coconut
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01000378_coconut_10",
"object_type": "coconut",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 683
},
"original_image_path": "counting/objects365_counting/images/v2/patch18/objects365_v2_01000378.jpg",
"question": "How many coconut in this image?",
"index": "object_counting_coconut_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of plate in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_plate
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00306926_plate_0",
"object_type": "plate",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 683,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch6/objects365_v1_00306926.jpg",
"question": "<image>\nCan you count the number of plate in this image?",
"index": "object_counting_plate_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of wine glass in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_wine
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00190409_wine_5",
"object_type": "wine",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 733,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch4/objects365_v1_00190409.jpg",
"question": "What's the number of wine glass in this image?",
"index": "object_counting_wine_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many awning in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_awning
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01131804_awning_13",
"object_type": "awning",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 768
},
"original_image_path": "counting/objects365_counting/images/v2/patch21/objects365_v2_01131804.jpg",
"question": "How many awning in this image?",
"index": "object_counting_awning_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of router/modem in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_router
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01358197_router_2",
"object_type": "router",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 334
},
"original_image_path": "counting/objects365_counting/images/v2/patch27/objects365_v2_01358197.jpg",
"question": "Please tell me the count of router/modem in this image?",
"index": "object_counting_router_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of dishwasher in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_dishwasher
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00359165_dishwasher_0",
"object_type": "dishwasher",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 720,
"height": 480
},
"original_image_path": "counting/objects365_counting/images/v1/patch7/objects365_v1_00359165.jpg",
"question": "<image>\nPlease tell me the count of dishwasher in this image?",
"index": "object_counting_dishwasher_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many crab in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_crab
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01149672_crab_2",
"object_type": "crab",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 768,
"height": 1024
},
"original_image_path": "counting/objects365_counting/images/v2/patch22/objects365_v2_01149672.jpg",
"question": "How many crab in this image?",
"index": "object_counting_crab_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of snowboard in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_snowboard
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00445073_snowboard_3",
"object_type": "snowboard",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 769,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch9/objects365_v1_00445073.jpg",
"question": "What's the number of snowboard in this image?",
"index": "object_counting_snowboard_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many cymbal can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_cymbal
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01693836_cymbal_4",
"object_type": "cymbal",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 683,
"height": 1024
},
"original_image_path": "counting/objects365_counting/images/v2/patch35/objects365_v2_01693836.jpg",
"question": "How many cymbal can you see in this image?",
"index": "object_counting_cymbal_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of sports car in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_sports
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01316127_sports_4",
"object_type": "sports",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 375
},
"original_image_path": "counting/objects365_counting/images/v2/patch26/objects365_v2_01316127.jpg",
"question": "What's the number of sports car in this image?",
"index": "object_counting_sports_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many lighter in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_lighter
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01965123_lighter_0",
"object_type": "lighter",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 766,
"height": 1024
},
"original_image_path": "counting/objects365_counting/images/v2/patch42/objects365_v2_01965123.jpg",
"question": "How many lighter in this image?",
"index": "object_counting_lighter_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many tablet in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_tablet
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01637390_tablet_0",
"object_type": "tablet",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 375,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch34/objects365_v2_01637390.jpg",
"question": "<image>\nHow many tablet in this image?",
"index": "object_counting_tablet_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of wallet/purse in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_wallet
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00728797_wallet_7",
"object_type": "wallet",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 683,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch15/objects365_v1_00728797.jpg",
"question": "Please tell me the count of wallet/purse in this image?",
"index": "object_counting_wallet_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of extension cord in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_extension
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01656890_extension_2",
"object_type": "extension",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 683
},
"original_image_path": "counting/objects365_counting/images/v2/patch34/objects365_v2_01656890.jpg",
"question": "Can you count the number of extension cord in this image?",
"index": "object_counting_extension_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many hot dog can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_hot
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01602176_hot_9",
"object_type": "hot",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 375
},
"original_image_path": "counting/objects365_counting/images/v2/patch33/objects365_v2_01602176.jpg",
"question": "How many hot dog can you see in this image?",
"index": "object_counting_hot_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of hurdle in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_hurdle
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01063865_hurdle_5",
"object_type": "hurdle",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 678
},
"original_image_path": "counting/objects365_counting/images/v2/patch20/objects365_v2_01063865.jpg",
"question": "Please tell me the count of hurdle in this image?",
"index": "object_counting_hurdle_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of desk in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_desk
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01275095_desk_4",
"object_type": "desk",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 375,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch25/objects365_v2_01275095.jpg",
"question": "What's the number of desk in this image?",
"index": "object_counting_desk_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of medal in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_medal
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00317059_medal_2",
"object_type": "medal",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 857,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch6/objects365_v1_00317059.jpg",
"question": "Please tell me the count of medal in this image?",
"index": "object_counting_medal_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many flask in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_flask
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_00922698_flask_10",
"object_type": "flask",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 440
},
"original_image_path": "counting/objects365_counting/images/v2/patch16/objects365_v2_00922698.jpg",
"question": "How many flask in this image?",
"index": "object_counting_flask_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of key in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_key
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01193883_key_3",
"object_type": "key",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 333,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch23/objects365_v2_01193883.jpg",
"question": "What's the number of key in this image?",
"index": "object_counting_key_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many ship in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_ship
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00731819_ship_5",
"object_type": "ship",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 569,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch15/objects365_v1_00731819.jpg",
"question": "How many ship in this image?",
"index": "object_counting_ship_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of washing machine/drying machine in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_washing
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01997684_washing_6",
"object_type": "washing",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 768
},
"original_image_path": "counting/objects365_counting/images/v2/patch43/objects365_v2_01997684.jpg",
"question": "What's the number of washing machine/drying machine in this image?",
"index": "object_counting_washing_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many golf club can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_golf
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01691351_golf_5",
"object_type": "golf",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 768,
"height": 1024
},
"original_image_path": "counting/objects365_counting/images/v2/patch35/objects365_v2_01691351.jpg",
"question": "How many golf club can you see in this image?",
"index": "object_counting_golf_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many flask in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_flask
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01286279_flask_7",
"object_type": "flask",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 333
},
"original_image_path": "counting/objects365_counting/images/v2/patch25/objects365_v2_01286279.jpg",
"question": "How many flask in this image?",
"index": "object_counting_flask_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many papaya can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_papaya
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01101443_papaya_4",
"object_type": "papaya",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 768
},
"original_image_path": "counting/objects365_counting/images/v2/patch21/objects365_v2_01101443.jpg",
"question": "How many papaya can you see in this image?",
"index": "object_counting_papaya_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of tuba in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_tuba
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01488641_tuba_4",
"object_type": "tuba",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 279
},
"original_image_path": "counting/objects365_counting/images/v2/patch30/objects365_v2_01488641.jpg",
"question": "Please tell me the count of tuba in this image?",
"index": "object_counting_tuba_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of converter in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_converter
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00239564_converter_3",
"object_type": "converter",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 772,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch5/objects365_v1_00239564.jpg",
"question": "Please tell me the count of converter in this image?",
"index": "object_counting_converter_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of trolley in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_trolley
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01136069_trolley_0",
"object_type": "trolley",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 375
},
"original_image_path": "counting/objects365_counting/images/v2/patch21/objects365_v2_01136069.jpg",
"question": "<image>\nWhat's the number of trolley in this image?",
"index": "object_counting_trolley_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many curling in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_curling
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00125505_curling_7",
"object_type": "curling",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 770,
"height": 512
},
"original_image_path": "counting/objects365_counting/images/v1/patch2/objects365_v1_00125505.jpg",
"question": "How many curling in this image?",
"index": "object_counting_curling_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of oyster in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_oyster
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01455910_oyster_0",
"object_type": "oyster",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 375
},
"original_image_path": "counting/objects365_counting/images/v2/patch29/objects365_v2_01455910.jpg",
"question": "What's the number of oyster in this image?",
"index": "object_counting_oyster_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many backpack can you see in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_backpack
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01771120_backpack_2",
"object_type": "backpack",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 683
},
"original_image_path": "counting/objects365_counting/images/v2/patch37/objects365_v2_01771120.jpg",
"question": "How many backpack can you see in this image?",
"index": "object_counting_backpack_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of golf club in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_golf
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01277970_golf_3",
"object_type": "golf",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 375
},
"original_image_path": "counting/objects365_counting/images/v2/patch25/objects365_v2_01277970.jpg",
"question": "Can you count the number of golf club in this image?",
"index": "object_counting_golf_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of pomegranate in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_pomegranate
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00289616_pomegranate_4",
"object_type": "pomegranate",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 640,
"height": 480
},
"original_image_path": "counting/objects365_counting/images/v1/patch6/objects365_v1_00289616.jpg",
"question": "What's the number of pomegranate in this image?",
"index": "object_counting_pomegranate_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of refrigerator in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_refrigerator
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01069422_refrigerator_0",
"object_type": "refrigerator",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 476,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch20/objects365_v2_01069422.jpg",
"question": "<image>\nWhat's the number of refrigerator in this image?",
"index": "object_counting_refrigerator_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of french in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_french
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01452926_french_0",
"object_type": "french",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 768,
"height": 1023
},
"original_image_path": "counting/objects365_counting/images/v2/patch29/objects365_v2_01452926.jpg",
"question": "Can you count the number of french in this image?",
"index": "object_counting_french_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of urinal in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_urinal
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01075611_urinal_12",
"object_type": "urinal",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 768
},
"original_image_path": "counting/objects365_counting/images/v2/patch20/objects365_v2_01075611.jpg",
"question": "Please tell me the count of urinal in this image?",
"index": "object_counting_urinal_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Please tell me the count of treadmill in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_treadmill
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_02160457_treadmill_0",
"object_type": "treadmill",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 500,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch47/objects365_v2_02160457.jpg",
"question": "Please tell me the count of treadmill in this image?",
"index": "object_counting_treadmill_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of cigar/cigarette in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_cigar
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01750983_cigar_8",
"object_type": "cigar",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 768
},
"original_image_path": "counting/objects365_counting/images/v2/patch37/objects365_v2_01750983.jpg",
"question": "Can you count the number of cigar/cigarette in this image?",
"index": "object_counting_cigar_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "What's the number of air conditioner in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_air
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v2_01985446_air_5",
"object_type": "air",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 333,
"height": 500
},
"original_image_path": "counting/objects365_counting/images/v2/patch43/objects365_v2_01985446.jpg",
"question": "What's the number of air conditioner in this image?",
"index": "object_counting_air_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many gloves in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
] |
object_counting_gloves
|
visual_toolbox_v2
|
{
"ground_truth": "0"
}
|
{
"id": "objects365_count_cot_train_objects365_v1_00636210_gloves_2",
"object_type": "gloves",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 640,
"height": 427
},
"original_image_path": "counting/objects365_counting/images/v1/patch13/objects365_v1_00636210.jpg",
"question": "How many gloves in this image?",
"index": "object_counting_gloves_0"
}
|
|
objects365_counting
|
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of egg tart in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
]
ability: object_counting_egg
env_name: visual_toolbox_v2
reward_model:
{
"ground_truth": "0"
}
extra_info:
{
"id": "objects365_count_cot_train_objects365_v2_01267352_egg_3",
"object_type": "egg",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 685
},
"original_image_path": "counting/objects365_counting/images/v2/patch25/objects365_v2_01267352.jpg",
"question": "Can you count the number of egg tart in this image?",
"index": "object_counting_egg_0"
}

data_source: objects365_counting
prompt:
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "Can you count the number of suv in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
]
ability: object_counting_suv
env_name: visual_toolbox_v2
reward_model:
{
"ground_truth": "0"
}
extra_info:
{
"id": "objects365_count_cot_train_objects365_v1_00620429_suv_0",
"object_type": "suv",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 640,
"height": 429
},
"original_image_path": "counting/objects365_counting/images/v1/patch13/objects365_v1_00620429.jpg",
"question": "<image>\nCan you count the number of suv in this image?",
"index": "object_counting_suv_0"
}

data_source: objects365_counting
prompt:
[
{
"content": "You are a helpful assistant.\n\n# Tools\nYou may call one or more functions to assist with the user query.\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>\n{\"type\":\"function\",\"function\":{\"name\":\"image_zoom_in_tool\",\"description\":\"Zoom in on a specific region of an image by cropping it based on a bounding box (bbox) and an optional object label.\",\"parameters\":{\"type\":\"object\",\"properties\":{\"bbox_2d\":{\"type\":\"array\",\"items\":{\"type\":\"number\"},\"minItems\":4,\"maxItems\":4,\"description\":\"The bounding box of the region to zoom in, as [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.\"},\"label\":{\"type\":\"string\",\"description\":\"The name or label of the object in the specified bounding box (optional).\"}},\"required\":[\"bbox\"]}}}\n</tools>\n\n# How to call a tool\nReturn a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call>\n\n**Example**: \n<tool_call> \n{\"name\": \"image_zoom_in_tool\", \"arguments\": {\"bbox_2d\": [10, 20, 100, 200], \"label\": \"the apple on the desk\"}} \n</tool_call>",
"role": "system"
},
{
"content": "How many helicopter in this image?\nThink first, call **image_zoom_in_tool** if needed, then answer. Format strictly as: <think>...</think> <tool_call>...</tool_call> (if tools needed) <answer>...</answer> ",
"role": "user"
}
]
ability: object_counting_helicopter
env_name: visual_toolbox_v2
reward_model:
{
"ground_truth": "0"
}
extra_info:
{
"id": "objects365_count_cot_train_objects365_v2_01200761_helicopter_7",
"object_type": "helicopter",
"count": 0,
"bounding_boxes": {
"x": [],
"y": [],
"width": [],
"height": []
},
"image_size": {
"width": 1024,
"height": 768
},
"original_image_path": "counting/objects365_counting/images/v2/patch23/objects365_v2_01200761.jpg",
"question": "How many helicopter in this image?",
"index": "object_counting_helicopter_0"
}
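The preview above shows the row schema (data_source, prompt, ability, env_name, reward_model, extra_info) but no loading instructions, so what follows is only a rough sketch of how such a split might be read with the Hugging Face `datasets` library. The repository id is a placeholder rather than the real dataset path, and the small helper guards against `reward_model` and `extra_info` arriving either as nested dicts or as JSON strings, since the storage format cannot be confirmed from the preview alone.

```python
import json

from datasets import load_dataset  # pip install datasets

# Placeholder repository id; substitute the actual dataset path.
REPO_ID = "your-org/objects365_counting"

# Assumes a "train" split exists; adjust if the dataset uses another split name.
ds = load_dataset(REPO_ID, split="train")


def as_dict(value):
    """Return a dict whether the column is stored as a dict or as a JSON string."""
    return json.loads(value) if isinstance(value, str) else value


row = ds[0]
prompt = row["prompt"]                 # list of chat messages (system + user)
reward = as_dict(row["reward_model"])  # e.g. {"ground_truth": "0"}
info = as_dict(row["extra_info"])      # object_type, count, bounding_boxes, ...

print("data_source :", row["data_source"])
print("ability     :", row["ability"])
print("env_name    :", row["env_name"])
print("user turn   :", prompt[-1]["content"][:60])
print("ground truth:", reward["ground_truth"])
print("object type :", info["object_type"], "count:", info["count"])
```

Depending on how the parquet files were written, `prompt` may also come back as a JSON string; the same `as_dict` guard can be applied to it before indexing into the message list.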
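Each row's `extra_info.bounding_boxes` stores boxes as parallel `x`, `y`, `width`, and `height` arrays (empty in the fully visible preview rows, all of which have a count of 0), while the `image_zoom_in_tool` described in the prompts takes corner-format boxes as `bbox_2d = [x1, y1, x2, y2]`. A minimal conversion sketch is below; it assumes `x` and `y` are top-left pixel coordinates, which the preview does not state explicitly.

```python
def boxes_to_bbox_2d(bounding_boxes, image_size=None):
    """Convert parallel x/y/width/height arrays into [x1, y1, x2, y2] boxes.

    Assumes (x, y) is the top-left corner in pixels. If image_size (a dict with
    "width" and "height") is given, the far corner is clamped to the image.
    """
    boxes = []
    xs, ys = bounding_boxes["x"], bounding_boxes["y"]
    ws, hs = bounding_boxes["width"], bounding_boxes["height"]
    for x, y, w, h in zip(xs, ys, ws, hs):
        x2, y2 = x + w, y + h
        if image_size is not None:
            x2 = min(x2, image_size["width"])
            y2 = min(y2, image_size["height"])
        boxes.append([x, y, x2, y2])
    return boxes


# The gloves row from the preview has no boxes, so this prints [].
empty = {"x": [], "y": [], "width": [], "height": []}
print(boxes_to_bbox_2d(empty, image_size={"width": 640, "height": 427}))
```

For rows with a nonzero count, the resulting boxes would match the coordinate convention the tool's `bbox_2d` argument appears to expect, but that equivalence is an assumption, not something stated in the preview.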