Datasets:

datasetId (string, 6 to 118 chars) | author (string, 2 to 42 chars) | last_modified (date, 2021-04-29 15:34:29 to 2025-09-30 12:15:29) | downloads (int64, 0 to 3.97M) | likes (int64, 0 to 7.74k) | tags (list, 1 to 7.92k items) | task_categories (list, 0 to 48 items) | createdAt (date, 2022-03-02 23:29:22 to 2025-09-30 12:08:01) | trending_score (float64, 0 to 64) | card (string, 31 to 1M chars) |
---|---|---|---|---|---|---|---|---|---|
Gusanidas/neural_activations_5
|
Gusanidas
|
2025-04-16T07:14:58Z
| 18 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-16T07:14:34Z
| 0 |
---
dataset_info:
features:
- name: neural_data
sequence: float64
- name: activation_data
sequence: float32
- name: start_time
dtype: float64
- name: end_time
dtype: float64
- name: index
dtype: int64
splits:
- name: train
num_bytes: 122809280
num_examples: 6116
download_size: 88755562
dataset_size: 122809280
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CaoLulu7829461232512/Traffic
|
CaoLulu7829461232512
|
2024-11-27T02:31:42Z
| 13 | 0 |
[
"license:apache-2.0",
"region:us"
] |
[] |
2024-11-27T02:31:42Z
| 0 |
---
license: apache-2.0
---
|
exafluence/Open-MedQA-Nexus
|
exafluence
|
2024-10-15T03:10:33Z
| 39 | 0 |
[
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"medicine",
"healthcare"
] |
[
"question-answering",
"text-generation"
] |
2024-10-15T02:43:28Z
| 0 |
---
license: apache-2.0
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: source
dtype: string
- name: source_url
dtype: string
splits:
- name: train
num_bytes: 1330442127
num_examples: 646749
download_size: 602658811
dataset_size: 1330442127
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- medicine
- healthcare
size_categories:
- 100K<n<1M
---
# Open Nexus MedQA
<!-- Provide a quick summary of the dataset. -->
This dataset combines various publicly available medical datasets like ChatDoctor, icliniq, etc., into a unified format for training and evaluating medical question-answering models.
## Dataset Details
<!-- Provide a longer summary of what this dataset is. -->
Open Nexus MedQA is a comprehensive dataset designed to facilitate the development of advanced medical question-answering systems. It integrates diverse medical data sources, meticulously processed into a uniform format. The format includes:
- Instructions: Clear and concise instructions for each question.
- Inputs: Medical queries ranging from simple to complex.
- Outputs: Accurate and informative responses to the corresponding questions.
- Source Information: Details about the original dataset from which each example was derived.
- **Curated by:** Exafluence Inc
- **Shared by:** Exafluence Inc
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
Open Nexus MedQA integrates data from a diverse range of publicly available medical datasets. Here's a breakdown of the sources:
**ChatDoctor-based Datasets:**
- Alpaca Data - ChatDoctor: [Link](https://github.com/Kent0n-Li/ChatDoctor/)
- icliniq.com - ChatDoctor: [Link](https://drive.google.com/file/d/1ZKbqgYqWc7DJHs3N9TQYQVPdDQmZaClA/view)
- HealthCareMagic.com - ChatDoctor: [Link](https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view)
**Hugging Face Datasets:**
- CareQA - HPAI-BSC: [Link](https://huggingface.co/datasets/HPAI-BSC/CareQA)
- medmcqa_mixtral_cot - HPAI-BSC: [Link](https://huggingface.co/datasets/HPAI-BSC/medmcqa-cot)
- medqa_mixtral_cot - HPAI-BSC: [Link](https://huggingface.co/datasets/HPAI-BSC/medqa-cot)
- pubmedqa_mixtral_cot - HPAI-BSC: [Link](https://huggingface.co/datasets/HPAI-BSC/pubmedqa-cot)
**Other Datasets:**
- MedInstruct-52k: [Link](https://huggingface.co/datasets/lavita/AlpaCare-MedInstruct-52k)
- US QBank: [Link](https://github.com/jind11/MedQA)
**Note:** We actively encourage users to explore the original datasets for further details. References to the original datasets will be provided within the dataset metadata.
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
Open Nexus MedQA can be used for various purposes:
- Research: Train and evaluate medical question answering models.
- Development: Build and improve AI-powered medical applications (chatbots, virtual assistants, diagnostic tools).
- Education: Enhance the understanding of medical information retrieval for students and professionals.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
- Direct diagnosis or treatment: The dataset is not intended for medical diagnosis or treatment. Consult with qualified healthcare professionals for proper medical care.
- Commercial use without permission: The initial release allows non-commercial use. Refer to the license for commercial applications.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset contains records in a unified format:
- Instruction: Text indicating the task or question.
- Input: Medical query or prompt for the question.
- Output: Corresponding accurate and informative answer.
- Source: Information about the original dataset from which the record originated.
- Source URL: URL of the original source dataset.
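A minimal sketch of loading the dataset and inspecting these unified fields with the Hugging Face `datasets` library (field names and the single `train` split are taken from the metadata block above):
```python
from datasets import load_dataset

# The card's metadata declares a single "train" split in the default config.
ds = load_dataset("exafluence/Open-MedQA-Nexus", split="train")

# Each record follows the unified format described above.
example = ds[0]
for field in ("instruction", "input", "output", "source", "source_url"):
    print(field, ":", example[field])
```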
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
We aimed to create a comprehensive and diverse medical question-answering dataset by merging various public datasets. This unified format allows researchers and developers to build robust medical NLP models.
### Source Data
The dataset integrates publicly available medical datasets such as ChatDoctor, icliniq, CareQA, HealthCareMagic, PubMedQA, MedQA, MedMCQA, MedInstruct, and US QBank.
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
Each source dataset underwent various processing steps to achieve a consistent format:
- Data Extraction: Relevant data points (instructions, inputs, outputs) were extracted from each source.
- Normalization: Text processing steps like cleaning, tokenization, and normalization were applied.
- Alignment: Data was aligned to the unified format with instruction, input, output, and source information columns.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
The source datasets were created by various independent organizations or researchers. We acknowledge their contributions and provide references to the original sources within the dataset metadata.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset.
## Dataset Card Authors
[Jeevan J](https://huggingface.co/jeevan-exa)
|
MCINext/persian-web-document-retrieval
|
MCINext
|
2025-06-02T16:00:12Z
| 65 | 0 |
[
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-09T12:15:57Z
| 0 |
---
configs:
- config_name: default
data_files:
- split: train
path: qrels/train.jsonl
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
---
## Dataset Summary
**Persian Web Document Retrieval** is a Persian (Farsi) dataset designed for the **Retrieval** task. It is a component of the [FaMTEB (Farsi Massive Text Embedding Benchmark)](https://huggingface.co/spaces/mteb/leaderboard). This dataset consists of real-world queries collected from the Zarrebin search engine and web documents labeled by humans for relevance. It is curated to evaluate model performance in web search scenarios.
* **Language(s):** Persian (Farsi)
* **Task(s):** Retrieval (Web Search)
* **Source:** Collected from Zarrebin search engine logs and human-labeled documents
* **Part of FaMTEB:** Yes
## Supported Tasks and Leaderboards
The dataset benchmarks how well text embedding models can retrieve relevant web documents in response to real user queries in Persian. This is crucial for search engines and information access systems. Results can be explored on the [Persian MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) (filter by language: Persian).
## Construction
1. Real search queries were sourced from **Zarrebin**, a major Persian-language search engine.
2. Relevant documents were retrieved and **manually labeled** for query-document relevance.
3. The final dataset reflects authentic web search behavior and document diversity in Persian.
4. The dataset is referenced in the FaMTEB paper as “Davani et al., 2023” ([Paper ID: 10553090]).
## Data Splits
* **Train:** 245,692 samples
* **Development (Dev):** 0 samples
* **Test:** 175,472 samples
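A minimal sketch of loading the splits and configs declared in the YAML header above with the `datasets` library:
```python
from datasets import load_dataset

repo = "MCINext/persian-web-document-retrieval"

# Relevance judgments (default config) come with "train" and "test" splits.
qrels_train = load_dataset(repo, split="train")
qrels_test = load_dataset(repo, split="test")

# Documents and queries live in their own configs, each with a single split.
corpus = load_dataset(repo, "corpus", split="corpus")
queries = load_dataset(repo, "queries", split="queries")
```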
|
AlekseyCalvin/Lyrical_Rus2Eng_ver4_forRWKV_RLtrainer_jsonl
|
AlekseyCalvin
|
2025-09-29T11:10:29Z
| 0 | 0 |
[
"task_categories:translation",
"language:ru",
"language:en",
"license:apache-2.0",
"size_categories:1K<n<10K",
"region:us",
"poetry",
"songs",
"professional",
"manual",
"creative",
"lyrics",
"Russian",
"English",
"translation",
"bilingual",
"ORPO",
"DPO",
"CPO",
"literature",
"literary",
"lyrical",
"reinforcement",
"training",
"human-curated",
"high-fidelity",
"culture",
"interlinear",
"contrastive",
"soviet",
"rock"
] |
[
"translation"
] |
2025-09-29T11:07:42Z
| 0 |
---
license: apache-2.0
task_categories:
- translation
language:
- ru
- en
tags:
- poetry
- songs
- professional
- manual
- creative
- lyrics
- Russian
- English
- translation
- bilingual
- ORPO
- DPO
- CPO
- literature
- literary
- lyrical
- reinforcement
- training
- human-curated
- high-fidelity
- culture
- interlinear
- contrastive
- soviet
- rock
size_categories:
- 1K<n<10K
---
## Song-lyrics & Poems by seminal & obscure Soviet & Russian songwriters, bands, & poets.<br>
**EDITED VARIANT 4** <br>
**Re-balanced, edited, substantially abridged/consolidated, somewhat re-expanded** <br>
**JSONLines Version, no more excessive separators, category titles altered for the [RWKV LM RLHF trainer](https://github.com/OpenMOSE/RWKV-LM-RLHF)** <br>
ALTERNATE VERSION OF THE DATASET (3 columns: prompt, chosen, rejected)<br>
Manually translated to English by Aleksey Calvin, with a painstaking effort to cross-linguistically reproduce source texts' phrasal/phonetic, rhythmic, metric, syllabic, melodic, and other lyrical/performance-catered features, whilst retaining adequate semantic/significational fidelity. <br>
**This dataset samples months and years of inspired and exhausting labors of translation, composition, and poetic/lyrical/musical adaptation.** <br>
This repo's variant of the dataset was compiled/structured for ORPO-style fine-tuning of LLMs. <br>
The sampling herein constitutes a varied inter-mixture of single-line lyrical fragments (appearing most frequently), entire songs/poems, and/or verse/chorus-length song/poem excerpts (often 4-line quatrains of 2x2 couplets in abab or aabb rhyme schemes). <br>
Each row contains the following categories/columns: {prompt}, {chosen}, {rejected}. <br>
{prompt} = source lyrics (either a song line, a song segment (verse, chorus, etc.), or an entire song) <br>
{chosen} = "lyrically-informed" translation of the source lyric by an experienced/trained human literary translator and bilingual songwriter-performer, <br>
{rejected} = direct/standard translation by an LLM (Gemini 2.5 Pro, Gwen3, and others) or a widely-used specialized translation software tool with stable, but unremarkable, translation abilities (DeepL) <br>
**Translator/Editor/Data-curator**: *Aleksey Calvin Tsukanov (aka A.C.T. soon®)* (multilingual literary translator/archivist, multimedia artist, ML developer/enthusiast, curator of SilverAgePoets.com, and editor/publisher of small-press versebooks, songbooks, and other publications). <br>
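For reference, a minimal sketch of reading the rows into the {prompt}/{chosen}/{rejected} triplets that ORPO/DPO/CPO-style trainers consume; the file name below is a placeholder, not this repo's actual file name:
```python
import json

# Hypothetical file name for illustration; substitute the actual JSONL file in this repo.
with open("lyrical_rus2eng_v4.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]

for row in rows[:3]:
    print("prompt  :", row["prompt"])    # source lyric: line, verse/chorus, or full song
    print("chosen  :", row["chosen"])    # human literary translation
    print("rejected:", row["rejected"])  # direct LLM/DeepL translation
```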
|
michsethowusu/fulah-ganda_sentence-pairs
|
michsethowusu
|
2025-04-02T11:42:02Z
| 8 | 0 |
[
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-02T11:41:38Z
| 0 |
---
dataset_info:
features:
- name: score
dtype: float32
- name: Fulah
dtype: string
- name: Ganda
dtype: string
splits:
- name: train
num_bytes: 14295107
num_examples: 91895
download_size: 14295107
dataset_size: 14295107
configs:
- config_name: default
data_files:
- split: train
path: Fulah-Ganda_Sentence-Pairs.csv
---
# Fulah-Ganda_Sentence-Pairs Dataset
This dataset contains sentence pairs for African languages along with similarity scores. It can be used for machine translation, sentence alignment, or other natural language processing tasks.
This dataset is based on the NLLBv1 dataset, published on OPUS under an open-source initiative led by Meta. You can find more information here: [OPUS - NLLB-v1](https://opus.nlpl.eu/legacy/NLLB-v1.php)
## Metadata
- **File Name**: Fulah-Ganda_Sentence-Pairs
- **Number of Rows**: 91895
- **Number of Columns**: 3
- **Columns**: score, Fulah, Ganda
## Dataset Description
The dataset contains sentence pairs in African languages with an associated similarity score. Each row consists of three columns:
1. `score`: The similarity score between the two sentences (range from 0 to 1).
2. `Fulah`: The first sentence in the pair (language 1).
3. `Ganda`: The second sentence in the pair (language 2).
This dataset is intended for use in training and evaluating machine learning models for tasks like translation, sentence similarity, and cross-lingual transfer learning.
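A minimal sketch of loading and score-filtering the pairs with the `datasets` library (the 0.9 threshold is an arbitrary example):
```python
from datasets import load_dataset

# The default config reads Fulah-Ganda_Sentence-Pairs.csv as a single "train" split.
ds = load_dataset("michsethowusu/fulah-ganda_sentence-pairs", split="train")

# Keep only high-confidence alignments, e.g. similarity score >= 0.9.
high_conf = ds.filter(lambda row: row["score"] >= 0.9)
print(high_conf[0]["Fulah"], "||", high_conf[0]["Ganda"])
```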
## References
Below are papers related to how the data was collected and used in various multilingual and cross-lingual applications:
[1] Holger Schwenk and Matthijs Douze, Learning Joint Multilingual Sentence Representations with Neural Machine Translation, ACL Workshop on Representation Learning for NLP, 2017.
[2] Holger Schwenk and Xian Li, A Corpus for Multilingual Document Classification in Eight Languages, LREC, pages 3548-3551, 2018.
[3] Holger Schwenk, Filtering and Mining Parallel Data in a Joint Multilingual Space, ACL, July 2018.
[4] Alexis Conneau, Guillaume Lample, Ruty Rinott, Adina Williams, Samuel R. Bowman, Holger Schwenk and Veselin Stoyanov, XNLI: Cross-lingual Sentence Understanding through Inference, EMNLP, 2018.
[5] Mikel Artetxe and Holger Schwenk, Margin-based Parallel Corpus Mining with Multilingual Sentence Embeddings, arXiv, Nov 3 2018.
[6] Mikel Artetxe and Holger Schwenk, Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond, arXiv, Dec 26 2018.
[7] Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia, arXiv, July 11 2019.
[8] Holger Schwenk, Guillaume Wenzek, Sergey Edunov, Edouard Grave and Armand Joulin, CCMatrix: Mining Billions of High-Quality Parallel Sentences on the WEB.
[9] Paul-Ambroise Duquenne, Hongyu Gong and Holger Schwenk, Multimodal and Multilingual Embeddings for Large-Scale Speech Mining, NeurIPS 2021, pages 15748-15761.
[10] Kevin Heffernan, Onur Celebi and Holger Schwenk, Bitext Mining Using Distilled Sentence Representations for Low-Resource Languages.
|
Parth/piiforprivacy
|
Parth
|
2025-02-11T10:33:28Z
| 16 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-11T10:33:20Z
| 0 |
---
dataset_info:
features:
- name: source_text
dtype: string
- name: target_text
dtype: string
- name: privacy_mask
list:
- name: end
dtype: int64
- name: label
dtype: string
- name: start
dtype: int64
- name: value
dtype: string
- name: span_labels
dtype: string
- name: mbert_text_tokens
sequence: string
- name: mbert_bio_labels
sequence: string
- name: id
dtype: int64
- name: language
dtype: string
- name: set
dtype: string
- name: target_text_new
dtype: string
splits:
- name: train
num_bytes: 241703337
num_examples: 135621
download_size: 85693451
dataset_size: 241703337
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hamedrahimi/user-vlm-childrenQA
|
hamedrahimi
|
2025-02-01T15:59:17Z
| 20 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-01T15:59:13Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: user_profile
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 20945662.0
num_examples: 930
download_size: 20835337
dataset_size: 20945662.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/Asap7772open_web_math_raw_733334_766668
|
Asap7772
|
2025-02-12T03:57:26Z
| 14 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-11T08:30:07Z
| 0 |
---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: is_solution_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
splits:
- name: train
num_bytes: 308009933
num_examples: 25000
download_size: 136471651
dataset_size: 308009933
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ErikaaWang/M_0_generated__truthfulqa_mc1__mistral_score_gemma
|
ErikaaWang
|
2025-06-13T17:14:32Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-13T17:14:27Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: mc1_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int64
- name: mc2_targets
struct:
- name: choices
sequence: string
- name: labels
sequence: int64
- name: prompt
dtype: string
- name: correct_answer
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: responses
sequence: string
- name: evaluation_response
sequence:
sequence: string
splits:
- name: train
num_bytes: 5202732
num_examples: 490
download_size: 1753133
dataset_size: 5202732
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cchoi1/kodcode-complete_1000_qwen7b_sol_iter0_att10_sol5_lr5e5_3ep
|
cchoi1
|
2025-05-02T13:39:54Z
| 0 | 0 |
[
"region:us"
] |
[] |
2025-05-02T13:39:51Z
| 0 |
---
dataset_info:
features:
- name: mutation_id
dtype: int64
- name: task_id
dtype: string
- name: mutator_prompt
dtype: string
- name: solver_prompt
dtype: string
- name: response
dtype: string
- name: mutation_explanation
dtype: string
- name: mutation_info
dtype: string
- name: mutator_score
dtype: float64
- name: solution_scores
dtype: string
- name: solutions
dtype: string
- name: solutions_explanation
dtype: string
- name: solutions_info
dtype: string
splits:
- name: train
num_bytes: 61339186
num_examples: 4439
download_size: 12522891
dataset_size: 61339186
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
abhinav302019/olympiad_data_10079
|
abhinav302019
|
2025-03-05T20:34:50Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-05T20:34:49Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: Known_Solution
dtype: string
- name: Known_Answer
dtype: string
- name: Generated_Solution
dtype: string
- name: Generated_Answer
dtype: string
- name: Judge_Evaluation
dtype: string
- name: Judge_Rating
dtype: string
- name: Judge_Justification
dtype: string
splits:
- name: train
num_bytes: 60811
num_examples: 6
download_size: 47717
dataset_size: 60811
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LeRobot-worldwide-hackathon/14-parcelot-5ep
|
LeRobot-worldwide-hackathon
|
2025-06-15T04:36:38Z
| 0 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] |
[
"robotics"
] |
2025-06-15T04:36:31Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "bimanual_parcelot",
"total_episodes": 2,
"total_frames": 895,
"total_tasks": 1,
"total_videos": 6,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan.pos",
"left_shoulder_lift.pos",
"left_elbow_flex.pos",
"left_wrist_flex.pos",
"left_wrist_roll.pos",
"left_gripper.pos",
"right_shoulder_pan.pos",
"right_shoulder_lift.pos",
"right_elbow_flex.pos",
"right_wrist_flex.pos",
"right_wrist_roll.pos",
"right_gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
12
],
"names": [
"left_shoulder_pan.pos",
"left_shoulder_lift.pos",
"left_elbow_flex.pos",
"left_wrist_flex.pos",
"left_wrist_roll.pos",
"left_gripper.pos",
"right_shoulder_pan.pos",
"right_shoulder_lift.pos",
"right_elbow_flex.pos",
"right_wrist_flex.pos",
"right_wrist_roll.pos",
"right_gripper.pos"
]
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
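The per-frame parquet tables can also be read directly with the `datasets` library, independently of the LeRobot tooling; a minimal sketch (note the videos are stored separately as MP4 files and are not included this way):
```python
from datasets import load_dataset

# The default config globs data/*/*.parquet, i.e. the per-episode frame tables.
ds = load_dataset("LeRobot-worldwide-hackathon/14-parcelot-5ep", split="train")

# Each row is one frame: 12-dim action and state vectors plus indices and a timestamp.
frame = ds[0]
print(len(frame["action"]), frame["episode_index"], frame["timestamp"])
```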
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
TAUR-dev/evals__long_multiplication__four_digit__train__r1__3to8dig__samples
|
TAUR-dev
|
2025-04-03T21:26:01Z
| 8 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-03T21:25:59Z
| 0 |
---
dataset_info:
features:
- name: doc_id
dtype: int64
- name: doc
struct:
- name: answer
dtype: string
- name: problem
dtype: string
- name: question
dtype: string
- name: solution
dtype: string
- name: target
dtype: string
- name: arguments
struct:
- name: gen_args_0
struct:
- name: arg_0
dtype: string
- name: arg_1
struct:
- name: do_sample
dtype: bool
- name: max_gen_toks
dtype: int64
- name: temperature
dtype: float64
- name: until
sequence: 'null'
- name: resps
sequence:
sequence: string
- name: filtered_resps
sequence: string
- name: doc_hash
dtype: string
- name: prompt_hash
dtype: string
- name: target_hash
dtype: string
- name: exact_match
dtype: int64
- name: extracted_answers
sequence: string
splits:
- name: train
num_bytes: 14328410
num_examples: 100
download_size: 2042710
dataset_size: 14328410
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ShawnGao911101/test
|
ShawnGao911101
|
2025-02-12T03:24:04Z
| 16 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-12T03:22:11Z
| 0 |
---
license: apache-2.0
---
|
aalexchengg/variance_subset
|
aalexchengg
|
2025-04-23T08:54:24Z
| 21 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-23T08:54:23Z
| 0 |
---
dataset_info:
features:
- name: repo
dtype: string
- name: path
dtype: string
- name: func_name
dtype: string
- name: original_string
dtype: string
- name: language
dtype: string
- name: code
dtype: string
- name: code_tokens
sequence: string
- name: docstring
dtype: string
- name: docstring_tokens
sequence: string
- name: sha
dtype: string
- name: url
dtype: string
- name: partition
dtype: string
- name: summary
dtype: string
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 16253
num_examples: 1
download_size: 40571
dataset_size: 16253
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juyoung-trl/KoMagpie-raw
|
juyoung-trl
|
2024-11-08T08:59:23Z
| 19 | 0 |
[
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-08T07:59:59Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: additionals
struct:
- name: model
dtype: string
splits:
- name: train
num_bytes: 3720289639
num_examples: 2569865
download_size: 2078089086
dataset_size: 3720289639
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FrancophonIA/Termcat_Physical_Geography
|
FrancophonIA
|
2025-03-29T22:45:21Z
| 24 | 0 |
[
"task_categories:translation",
"language:ca",
"language:eng",
"language:fra",
"language:spa",
"language:deu",
"language:ita",
"license:cc-by-nd-4.0",
"region:us"
] |
[
"translation"
] |
2025-01-03T20:21:22Z
| 0 |
---
language:
- ca
- eng
- fra
- spa
- deu
- ita
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
license: cc-by-nd-4.0
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/lcr/19336
## Description
Physical Geography terms
## Citation
```
Termcat Physical Geography. (2022). Version unspecified. [Dataset (Lexical/Conceptual Resource)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/lcr/19336
```
|
nyuuzyou/journals
|
nyuuzyou
|
2025-02-27T18:53:20Z
| 40 | 2 |
[
"task_categories:image-classification",
"task_categories:image-to-text",
"annotations_creators:machine-generated",
"multilinguality:monolingual",
"source_datasets:original",
"language:ru",
"license:cc0-1.0",
"size_categories:1K<n<10K",
"modality:image",
"modality:text",
"region:us",
"image"
] |
[
"image-classification",
"image-to-text"
] |
2025-02-27T18:53:02Z
| 0 |
---
pretty_name: Historical Russian Technical Journal Images
size_categories:
- 1K<n<10K
task_categories:
- image-classification
- image-to-text
annotations_creators:
- machine-generated
language:
- ru
license: cc0-1.0
multilinguality:
- monolingual
source_datasets:
- original
tags:
- image
configs:
- config_name: metadata
data_files:
- split: train
path: dataset.jsonl.zst
default: true
- config_name: images
data_files:
- split: train
path: images.zip
---
# Dataset Card for Historical Russian Technical Journal Images
### Dataset Summary
This dataset contains images of pages from old Russian technical journals with descriptions generated using Google Gemini 2.0 Flash.
### Languages
The dataset is monolingual:
- Russian (ru): All journal pages are in Russian with corresponding Russian descriptions
## Dataset Structure
### Data Files
The dataset consists of:
- Image files (.jpg format)
- Corresponding description data in JSONL format containing filenames and descriptions
### Data Splits
All images and captions are in a single split.
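A minimal sketch of loading the description records via the `metadata` config declared above (reading the .zst-compressed JSONL may require the `zstandard` package):
```python
from datasets import load_dataset

# "metadata" is the default config and reads dataset.jsonl.zst (filenames + descriptions).
meta = load_dataset("nyuuzyou/journals", "metadata", split="train")
print(meta[0])
```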
## Additional Information
### License
The metadata in this dataset is dedicated to the public domain under the Creative Commons Zero (CC0) license. This means you can:
* Use the metadata for any purpose, including commercial projects
* Modify it however you like
* Distribute it without asking permission
Note that this CC0 license applies ONLY to the descriptions. The actual journal page images and other visual content remain subject to their original copyright and licenses.
CC0 license: https://creativecommons.org/publicdomain/zero/1.0/deed.en
|
abhinav302019/olympiad_data_10110
|
abhinav302019
|
2025-03-05T23:17:09Z
| 14 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-05T23:17:07Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: Known_Solution
dtype: string
- name: Known_Answer
dtype: string
- name: Generated_Solution
dtype: string
- name: Generated_Answer
dtype: string
- name: Judge_Evaluation
dtype: string
- name: Judge_Rating
dtype: string
- name: Judge_Justification
dtype: string
splits:
- name: train
num_bytes: 70455
num_examples: 6
download_size: 51435
dataset_size: 70455
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
passionMan/diabetes_instruct_v4
|
passionMan
|
2025-02-20T00:01:49Z
| 15 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-20T00:00:50Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4965405
num_examples: 9435
download_size: 2460052
dataset_size: 4965405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PlutoG99001/MusicGen-Electronic-Random
|
PlutoG99001
|
2025-02-14T19:52:10Z
| 20 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-14T19:52:08Z
| 0 |
---
dataset_info:
features:
- name: song_id
dtype: int64
- name: filename
dtype: string
- name: audio
dtype: audio
- name: genre_id
dtype: int64
- name: genre
dtype: string
splits:
- name: train
num_bytes: 20759238.605619147
num_examples: 50
download_size: 20392104
dataset_size: 20759238.605619147
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/olympiads_plot
|
mlfoundations-dev
|
2025-01-23T20:31:00Z
| 13 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-23T14:09:57Z
| 0 |
---
dataset_info:
features:
- name: source
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 656107.2808117334
num_examples: 226
download_size: 536307
dataset_size: 656107.2808117334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PGLearn/PGLearn-Small-57_ieee-nminus1
|
PGLearn
|
2025-05-17T21:40:39Z
| 0 | 0 |
[
"task_categories:tabular-regression",
"license:cc-by-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"energy",
"optimization",
"optimal_power_flow",
"power_grid"
] |
[
"tabular-regression"
] |
2025-05-17T21:18:10Z
| 0 |
---
license: cc-by-sa-4.0
tags:
- energy
- optimization
- optimal_power_flow
- power_grid
pretty_name: PGLearn Optimal Power Flow (57_ieee, N-1)
task_categories:
- tabular-regression
dataset_info:
config_name: 57_ieee-nminus1
features:
- name: input/pd
sequence: float32
length: 42
- name: input/qd
sequence: float32
length: 42
- name: input/gen_status
sequence: bool
length: 7
- name: input/branch_status
sequence: bool
length: 80
- name: input/seed
dtype: int64
- name: ACOPF/primal/vm
sequence: float32
length: 57
- name: ACOPF/primal/va
sequence: float32
length: 57
- name: ACOPF/primal/pg
sequence: float32
length: 7
- name: ACOPF/primal/qg
sequence: float32
length: 7
- name: ACOPF/primal/pf
sequence: float32
length: 80
- name: ACOPF/primal/pt
sequence: float32
length: 80
- name: ACOPF/primal/qf
sequence: float32
length: 80
- name: ACOPF/primal/qt
sequence: float32
length: 80
- name: ACOPF/dual/kcl_p
sequence: float32
length: 57
- name: ACOPF/dual/kcl_q
sequence: float32
length: 57
- name: ACOPF/dual/vm
sequence: float32
length: 57
- name: ACOPF/dual/pg
sequence: float32
length: 7
- name: ACOPF/dual/qg
sequence: float32
length: 7
- name: ACOPF/dual/ohm_pf
sequence: float32
length: 80
- name: ACOPF/dual/ohm_pt
sequence: float32
length: 80
- name: ACOPF/dual/ohm_qf
sequence: float32
length: 80
- name: ACOPF/dual/ohm_qt
sequence: float32
length: 80
- name: ACOPF/dual/pf
sequence: float32
length: 80
- name: ACOPF/dual/pt
sequence: float32
length: 80
- name: ACOPF/dual/qf
sequence: float32
length: 80
- name: ACOPF/dual/qt
sequence: float32
length: 80
- name: ACOPF/dual/va_diff
sequence: float32
length: 80
- name: ACOPF/dual/sm_fr
sequence: float32
length: 80
- name: ACOPF/dual/sm_to
sequence: float32
length: 80
- name: ACOPF/dual/slack_bus
dtype: float32
- name: ACOPF/meta/seed
dtype: int64
- name: ACOPF/meta/formulation
dtype: string
- name: ACOPF/meta/primal_objective_value
dtype: float32
- name: ACOPF/meta/dual_objective_value
dtype: float32
- name: ACOPF/meta/primal_status
dtype: string
- name: ACOPF/meta/dual_status
dtype: string
- name: ACOPF/meta/termination_status
dtype: string
- name: ACOPF/meta/build_time
dtype: float32
- name: ACOPF/meta/extract_time
dtype: float32
- name: ACOPF/meta/solve_time
dtype: float32
- name: DCOPF/primal/va
sequence: float32
length: 57
- name: DCOPF/primal/pg
sequence: float32
length: 7
- name: DCOPF/primal/pf
sequence: float32
length: 80
- name: DCOPF/dual/kcl_p
sequence: float32
length: 57
- name: DCOPF/dual/pg
sequence: float32
length: 7
- name: DCOPF/dual/ohm_pf
sequence: float32
length: 80
- name: DCOPF/dual/pf
sequence: float32
length: 80
- name: DCOPF/dual/va_diff
sequence: float32
length: 80
- name: DCOPF/dual/slack_bus
dtype: float32
- name: DCOPF/meta/seed
dtype: int64
- name: DCOPF/meta/formulation
dtype: string
- name: DCOPF/meta/primal_objective_value
dtype: float32
- name: DCOPF/meta/dual_objective_value
dtype: float32
- name: DCOPF/meta/primal_status
dtype: string
- name: DCOPF/meta/dual_status
dtype: string
- name: DCOPF/meta/termination_status
dtype: string
- name: DCOPF/meta/build_time
dtype: float32
- name: DCOPF/meta/extract_time
dtype: float32
- name: DCOPF/meta/solve_time
dtype: float32
- name: SOCOPF/primal/w
sequence: float32
length: 57
- name: SOCOPF/primal/pg
sequence: float32
length: 7
- name: SOCOPF/primal/qg
sequence: float32
length: 7
- name: SOCOPF/primal/pf
sequence: float32
length: 80
- name: SOCOPF/primal/pt
sequence: float32
length: 80
- name: SOCOPF/primal/qf
sequence: float32
length: 80
- name: SOCOPF/primal/qt
sequence: float32
length: 80
- name: SOCOPF/primal/wr
sequence: float32
length: 80
- name: SOCOPF/primal/wi
sequence: float32
length: 80
- name: SOCOPF/dual/kcl_p
sequence: float32
length: 57
- name: SOCOPF/dual/kcl_q
sequence: float32
length: 57
- name: SOCOPF/dual/w
sequence: float32
length: 57
- name: SOCOPF/dual/pg
sequence: float32
length: 7
- name: SOCOPF/dual/qg
sequence: float32
length: 7
- name: SOCOPF/dual/ohm_pf
sequence: float32
length: 80
- name: SOCOPF/dual/ohm_pt
sequence: float32
length: 80
- name: SOCOPF/dual/ohm_qf
sequence: float32
length: 80
- name: SOCOPF/dual/ohm_qt
sequence: float32
length: 80
- name: SOCOPF/dual/jabr
dtype:
array2_d:
shape:
- 80
- 4
dtype: float32
- name: SOCOPF/dual/sm_fr
dtype:
array2_d:
shape:
- 80
- 3
dtype: float32
- name: SOCOPF/dual/sm_to
dtype:
array2_d:
shape:
- 80
- 3
dtype: float32
- name: SOCOPF/dual/va_diff
sequence: float32
length: 80
- name: SOCOPF/dual/wr
sequence: float32
length: 80
- name: SOCOPF/dual/wi
sequence: float32
length: 80
- name: SOCOPF/dual/pf
sequence: float32
length: 80
- name: SOCOPF/dual/pt
sequence: float32
length: 80
- name: SOCOPF/dual/qf
sequence: float32
length: 80
- name: SOCOPF/dual/qt
sequence: float32
length: 80
- name: SOCOPF/meta/seed
dtype: int64
- name: SOCOPF/meta/formulation
dtype: string
- name: SOCOPF/meta/primal_objective_value
dtype: float32
- name: SOCOPF/meta/dual_objective_value
dtype: float32
- name: SOCOPF/meta/primal_status
dtype: string
- name: SOCOPF/meta/dual_status
dtype: string
- name: SOCOPF/meta/termination_status
dtype: string
- name: SOCOPF/meta/build_time
dtype: float32
- name: SOCOPF/meta/extract_time
dtype: float32
- name: SOCOPF/meta/solve_time
dtype: float32
splits:
- name: train
num_bytes: 5877252078
num_examples: 322804
- name: test
num_bytes: 1469331227
num_examples: 80702
download_size: 9825200275
dataset_size: 7346583305
configs:
- config_name: 57_ieee-nminus1
data_files:
- split: train
path: 57_ieee-nminus1/train-*
- split: test
path: 57_ieee-nminus1/test-*
default: true
---
|
1231czx/llama3_it_non_delete_with_gold_rewardstmp07
|
1231czx
|
2025-01-08T06:54:45Z
| 21 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-08T06:54:43Z
| 0 |
---
dataset_info:
features:
- name: idx
dtype: int64
- name: gt
dtype: string
- name: prompt
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: my_solu
sequence: string
- name: pred
sequence: string
- name: rewards
sequence: bool
splits:
- name: train
num_bytes: 51620723
num_examples: 15000
download_size: 17231087
dataset_size: 51620723
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
winniechow/conversation_env_1024
|
winniechow
|
2025-04-19T10:27:10Z
| 20 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-19T10:27:09Z
| 0 |
---
dataset_info:
features:
- name: user_id
dtype: int64
- name: prompt
dtype: string
- name: conversation_id
dtype: string
splits:
- name: train
num_bytes: 285676
num_examples: 1024
download_size: 144994
dataset_size: 285676
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jmarangola/wristDS2
|
jmarangola
|
2025-05-08T01:02:49Z
| 0 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] |
[
"robotics"
] |
2025-05-08T01:02:46Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 9,
"total_frames": 2200,
"total_tasks": 1,
"total_videos": 9,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 20,
"splits": {
"train": "0:9"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.image.wrist": {
"dtype": "video",
"names": [
"channels",
"height",
"width"
],
"shape": [
3,
240,
320
],
"info": {
"video.fps": 20.0,
"video.height": 240,
"video.width": 320,
"video.channels": 3,
"video.codec": "h264",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.state": {
"dtype": "float32",
"names": null,
"shape": [
10
]
},
"action": {
"dtype": "float32",
"shape": [
10
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
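As with other LeRobot exports, the frame tables under `data/` can be read directly; a small sketch that tallies frames per episode, assuming `episode_index` is stored as a scalar column as in typical LeRobot v2.1 exports:
```python
from collections import Counter

from datasets import load_dataset

# Default config globs data/*/*.parquet; info.json reports 9 episodes, 2200 frames.
ds = load_dataset("jmarangola/wristDS2", split="train")

# Count frames per episode to sanity-check against the metadata above.
counts = Counter(ds["episode_index"])
print(sorted(counts.items()))
```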
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
Asap7772/omnimath-qwen-gen__369_738
|
Asap7772
|
2025-03-26T06:14:26Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-26T05:24:06Z
| 0 |
---
dataset_info:
features:
- name: domain
sequence: string
- name: difficulty
dtype: float64
- name: problem
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: source
dtype: string
- name: completion
sequence: string
- name: completion_answer
sequence: string
- name: completion_correct
sequence: bool
- name: completion_succ_rate
dtype: float64
splits:
- name: train
num_bytes: 43843151
num_examples: 369
download_size: 12682838
dataset_size: 43843151
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
carlfeynman/Bharat_NanoFiQA2018_as
|
carlfeynman
|
2025-01-21T03:25:50Z
| 56 | 0 |
[
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"source_datasets:NanoFiQA2018",
"language:as",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"text-retrieval"
] |
[
"text-retrieval"
] |
2025-01-21T03:20:24Z
| 0 |
---
language:
- as
license: cc-by-4.0
multilinguality:
- monolingual
source_datasets:
- NanoFiQA2018
task_categories:
- text-retrieval
task_ids:
- document-retrieval
tags:
- text-retrieval
dataset_info:
- config_name: corpus
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
- config_name: qrels
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
splits:
- name: train
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: train
configs:
- config_name: corpus
data_files:
- split: train
path: corpus/train-*
- config_name: qrels
data_files:
- split: train
path: qrels/train-*
- config_name: queries
data_files:
- split: train
path: queries/train-*
---
# Bharat-NanoBEIR: Indian Language Information Retrieval Dataset
## Overview
This dataset is part of the Bharat-NanoBEIR collection, which provides information retrieval datasets for Indian languages. It is derived from the NanoBEIR project, which offers smaller versions of BEIR datasets containing 50 queries and up to 10K documents each.
## Dataset Description
This particular dataset is the Assamese version of the NanoFiQA2018 dataset, specifically adapted for information retrieval tasks. The translation and adaptation maintain the core structure of the original NanoBEIR while making it accessible for Assamese language processing.
## Usage
This dataset is designed for:
- Information Retrieval (IR) system development in Assamese
- Evaluation of multilingual search capabilities
- Cross-lingual information retrieval research
- Benchmarking Assamese language models for search tasks
## Dataset Structure
The dataset consists of three main components:
1. **Corpus**: Collection of documents in Assamese
2. **Queries**: Search queries in Assamese
3. **QRels**: Relevance judgments connecting queries to relevant documents
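A minimal sketch of loading the three components with the `datasets` library, using the config names from the card's YAML:
```python
from datasets import load_dataset

repo = "carlfeynman/Bharat_NanoFiQA2018_as"

corpus = load_dataset(repo, "corpus", split="train")    # documents: _id, text
queries = load_dataset(repo, "queries", split="train")  # search queries: _id, text
qrels = load_dataset(repo, "qrels", split="train")      # query-id -> corpus-id judgments
```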
## Citation
If you use this dataset, please cite:
```
@misc{bharat-nanobeir,
title={Bharat-NanoBEIR: Indian Language Information Retrieval Datasets},
year={2024},
url={https://huggingface.co/datasets/carlfeynman/Bharat_NanoFiQA2018_as}
}
```
## Additional Information
- **Language**: Assamese (as)
- **License**: CC-BY-4.0
- **Original Dataset**: NanoBEIR
- **Domain**: Information Retrieval
## License
This dataset is licensed under CC-BY-4.0. Please see the LICENSE file for details.
|
supergoose/buzz_sources_165_wizardlm-70b
|
supergoose
|
2024-11-10T20:28:46Z
| 18 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-10T20:28:45Z
| 0 |
---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: source
dtype: string
- name: stack
dtype: string
splits:
- name: train
num_bytes: 1110721
num_examples: 565
download_size: 600038
dataset_size: 1110721
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
GeoMeterData/nphard_sat1
|
GeoMeterData
|
2025-01-30T22:41:37Z
| 51 | 0 |
[
"region:us"
] |
[] |
2024-12-16T02:57:41Z
| 0 |
---
dataset_info:
features:
- name: num_var
dtype: int64
- name: num_clause
dtype: int64
- name: prompt
dtype: string
- name: solution
sequence: string
- name: category
dtype: string
- name: uid
dtype: int64
splits:
- name: train
num_bytes: 2279959
num_examples: 960
download_size: 555532
dataset_size: 2279959
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ADHIZ/zzz
|
ADHIZ
|
2024-11-15T10:23:38Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-15T10:23:35Z
| 0 |
---
dataset_info:
features:
- name: image_path
dtype: string
- name: prompt
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 1294
num_examples: 3
download_size: 4958
dataset_size: 1294
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
andrewsiah/PersonaPromptPersonalLLM_835
|
andrewsiah
|
2024-11-15T07:56:59Z
| 8 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-15T07:56:57Z
| 0 |
---
dataset_info:
features:
- name: personaid_835_response_1_llama3_sfairx
dtype: float64
- name: personaid_835_response_2_llama3_sfairx
dtype: float64
- name: personaid_835_response_3_llama3_sfairx
dtype: float64
- name: personaid_835_response_4_llama3_sfairx
dtype: float64
- name: personaid_835_response_5_llama3_sfairx
dtype: float64
- name: personaid_835_response_6_llama3_sfairx
dtype: float64
- name: personaid_835_response_7_llama3_sfairx
dtype: float64
- name: personaid_835_response_8_llama3_sfairx
dtype: float64
- name: prompt
dtype: string
- name: subset
dtype: string
- name: prompt_id
dtype: int64
- name: response_1
dtype: string
- name: response_1_model
dtype: string
- name: response_2
dtype: string
- name: response_2_model
dtype: string
- name: response_3
dtype: string
- name: response_3_model
dtype: string
- name: response_4
dtype: string
- name: response_4_model
dtype: string
- name: response_5
dtype: string
- name: response_5_model
dtype: string
- name: response_6
dtype: string
- name: response_6_model
dtype: string
- name: response_7
dtype: string
- name: response_7_model
dtype: string
- name: response_8
dtype: string
- name: response_8_model
dtype: string
- name: response_1_gemma_2b
dtype: float64
- name: response_2_gemma_2b
dtype: float64
- name: response_3_gemma_2b
dtype: float64
- name: response_4_gemma_2b
dtype: float64
- name: response_5_gemma_2b
dtype: float64
- name: response_6_gemma_2b
dtype: float64
- name: response_7_gemma_2b
dtype: float64
- name: response_8_gemma_2b
dtype: float64
- name: response_1_gemma_7b
dtype: float64
- name: response_2_gemma_7b
dtype: float64
- name: response_3_gemma_7b
dtype: float64
- name: response_4_gemma_7b
dtype: float64
- name: response_5_gemma_7b
dtype: float64
- name: response_6_gemma_7b
dtype: float64
- name: response_7_gemma_7b
dtype: float64
- name: response_8_gemma_7b
dtype: float64
- name: response_1_mistral_raft
dtype: float64
- name: response_2_mistral_raft
dtype: float64
- name: response_3_mistral_raft
dtype: float64
- name: response_4_mistral_raft
dtype: float64
- name: response_5_mistral_raft
dtype: float64
- name: response_6_mistral_raft
dtype: float64
- name: response_7_mistral_raft
dtype: float64
- name: response_8_mistral_raft
dtype: float64
- name: response_1_mistral_ray
dtype: float64
- name: response_2_mistral_ray
dtype: float64
- name: response_3_mistral_ray
dtype: float64
- name: response_4_mistral_ray
dtype: float64
- name: response_5_mistral_ray
dtype: float64
- name: response_6_mistral_ray
dtype: float64
- name: response_7_mistral_ray
dtype: float64
- name: response_8_mistral_ray
dtype: float64
- name: response_1_mistral_weqweasdas
dtype: float64
- name: response_2_mistral_weqweasdas
dtype: float64
- name: response_3_mistral_weqweasdas
dtype: float64
- name: response_4_mistral_weqweasdas
dtype: float64
- name: response_5_mistral_weqweasdas
dtype: float64
- name: response_6_mistral_weqweasdas
dtype: float64
- name: response_7_mistral_weqweasdas
dtype: float64
- name: response_8_mistral_weqweasdas
dtype: float64
- name: response_1_llama3_sfairx
dtype: float64
- name: response_2_llama3_sfairx
dtype: float64
- name: response_3_llama3_sfairx
dtype: float64
- name: response_4_llama3_sfairx
dtype: float64
- name: response_5_llama3_sfairx
dtype: float64
- name: response_6_llama3_sfairx
dtype: float64
- name: response_7_llama3_sfairx
dtype: float64
- name: response_8_llama3_sfairx
dtype: float64
- name: response_1_oasst_deberta_v3
dtype: float64
- name: response_2_oasst_deberta_v3
dtype: float64
- name: response_3_oasst_deberta_v3
dtype: float64
- name: response_4_oasst_deberta_v3
dtype: float64
- name: response_5_oasst_deberta_v3
dtype: float64
- name: response_6_oasst_deberta_v3
dtype: float64
- name: response_7_oasst_deberta_v3
dtype: float64
- name: response_8_oasst_deberta_v3
dtype: float64
- name: response_1_beaver_7b
dtype: float64
- name: response_2_beaver_7b
dtype: float64
- name: response_3_beaver_7b
dtype: float64
- name: response_4_beaver_7b
dtype: float64
- name: response_5_beaver_7b
dtype: float64
- name: response_6_beaver_7b
dtype: float64
- name: response_7_beaver_7b
dtype: float64
- name: response_8_beaver_7b
dtype: float64
- name: response_1_oasst_pythia_7b
dtype: float64
- name: response_2_oasst_pythia_7b
dtype: float64
- name: response_3_oasst_pythia_7b
dtype: float64
- name: response_4_oasst_pythia_7b
dtype: float64
- name: response_5_oasst_pythia_7b
dtype: float64
- name: response_6_oasst_pythia_7b
dtype: float64
- name: response_7_oasst_pythia_7b
dtype: float64
- name: response_8_oasst_pythia_7b
dtype: float64
- name: response_1_oasst_pythia_1b
dtype: float64
- name: response_2_oasst_pythia_1b
dtype: float64
- name: response_3_oasst_pythia_1b
dtype: float64
- name: response_4_oasst_pythia_1b
dtype: float64
- name: response_5_oasst_pythia_1b
dtype: float64
- name: response_6_oasst_pythia_1b
dtype: float64
- name: response_7_oasst_pythia_1b
dtype: float64
- name: response_8_oasst_pythia_1b
dtype: float64
- name: id
dtype: int64
- name: rformatted_promptresponse_1
dtype: string
- name: rformatted_promptresponse_2
dtype: string
- name: rformatted_promptresponse_3
dtype: string
- name: rformatted_promptresponse_4
dtype: string
- name: rformatted_promptresponse_5
dtype: string
- name: rformatted_promptresponse_6
dtype: string
- name: rformatted_promptresponse_7
dtype: string
- name: rformatted_promptresponse_8
dtype: string
splits:
- name: train
num_bytes: 32409752
num_examples: 1000
download_size: 18423118
dataset_size: 32409752
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "PersonaPromptPersonalLLM_835"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
community-datasets/qanta
|
community-datasets
|
2024-06-26T06:06:55Z
| 50,006 | 4 |
[
"task_categories:question-answering",
"annotations_creators:machine-generated",
"language_creators:found",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:unknown",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"arxiv:1904.04792",
"region:us",
"quizbowl"
] |
[
"question-answering"
] |
2022-03-02T23:29:22Z
| 0 |
---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids: []
paperswithcode_id: quizbowl
pretty_name: Quizbowl
tags:
- quizbowl
dataset_info:
- config_name: mode=first,char_skip=25
features:
- name: id
dtype: string
- name: qanta_id
dtype: int32
- name: proto_id
dtype: string
- name: qdb_id
dtype: int32
- name: dataset
dtype: string
- name: text
dtype: string
- name: full_question
dtype: string
- name: first_sentence
dtype: string
- name: char_idx
dtype: int32
- name: sentence_idx
dtype: int32
- name: tokenizations
sequence:
sequence: int32
length: 2
- name: answer
dtype: string
- name: page
dtype: string
- name: raw_answer
dtype: string
- name: fold
dtype: string
- name: gameplay
dtype: bool
- name: category
dtype: string
- name: subcategory
dtype: string
- name: tournament
dtype: string
- name: difficulty
dtype: string
- name: year
dtype: int32
splits:
- name: guesstrain
num_bytes: 117599150
num_examples: 96221
- name: buzztrain
num_bytes: 19699616
num_examples: 16706
- name: guessdev
num_bytes: 1414822
num_examples: 1055
- name: buzzdev
num_bytes: 1553576
num_examples: 1161
- name: guesstest
num_bytes: 2997063
num_examples: 2151
- name: buzztest
num_bytes: 2653365
num_examples: 1953
- name: adversarial
num_bytes: 1258784
num_examples: 1145
download_size: 90840024
dataset_size: 147176376
- config_name: mode=full,char_skip=25
features:
- name: id
dtype: string
- name: qanta_id
dtype: int32
- name: proto_id
dtype: string
- name: qdb_id
dtype: int32
- name: dataset
dtype: string
- name: text
dtype: string
- name: full_question
dtype: string
- name: first_sentence
dtype: string
- name: char_idx
dtype: int32
- name: sentence_idx
dtype: int32
- name: tokenizations
sequence:
sequence: int32
length: 2
- name: answer
dtype: string
- name: page
dtype: string
- name: raw_answer
dtype: string
- name: fold
dtype: string
- name: gameplay
dtype: bool
- name: category
dtype: string
- name: subcategory
dtype: string
- name: tournament
dtype: string
- name: difficulty
dtype: string
- name: year
dtype: int32
splits:
- name: guesstrain
num_bytes: 168874612
num_examples: 96221
- name: buzztrain
num_bytes: 27989445
num_examples: 16706
- name: guessdev
num_bytes: 2098857
num_examples: 1055
- name: buzzdev
num_bytes: 2301145
num_examples: 1161
- name: guesstest
num_bytes: 4434626
num_examples: 2151
- name: buzztest
num_bytes: 3930150
num_examples: 1953
- name: adversarial
num_bytes: 1799969
num_examples: 1145
download_size: 133005755
dataset_size: 211428804
- config_name: mode=runs,char_skip=25
features:
- name: id
dtype: string
- name: qanta_id
dtype: int32
- name: proto_id
dtype: string
- name: qdb_id
dtype: int32
- name: dataset
dtype: string
- name: text
dtype: string
- name: full_question
dtype: string
- name: first_sentence
dtype: string
- name: char_idx
dtype: int32
- name: sentence_idx
dtype: int32
- name: tokenizations
sequence:
sequence: int32
length: 2
- name: answer
dtype: string
- name: page
dtype: string
- name: raw_answer
dtype: string
- name: fold
dtype: string
- name: gameplay
dtype: bool
- name: category
dtype: string
- name: subcategory
dtype: string
- name: tournament
dtype: string
- name: difficulty
dtype: string
- name: year
dtype: int32
splits:
- name: guesstrain
num_bytes: 3975570298
num_examples: 2641161
- name: buzztrain
num_bytes: 622976884
num_examples: 433552
- name: guessdev
num_bytes: 55281178
num_examples: 33602
- name: buzzdev
num_bytes: 60226416
num_examples: 36803
- name: guesstest
num_bytes: 120192213
num_examples: 70772
- name: buzztest
num_bytes: 104422131
num_examples: 63050
- name: adversarial
num_bytes: 37874827
num_examples: 27986
download_size: 306157359
dataset_size: 4976543947
- config_name: mode=sentences,char_skip=25
features:
- name: id
dtype: string
- name: qanta_id
dtype: int32
- name: proto_id
dtype: string
- name: qdb_id
dtype: int32
- name: dataset
dtype: string
- name: text
dtype: string
- name: full_question
dtype: string
- name: first_sentence
dtype: string
- name: char_idx
dtype: int32
- name: sentence_idx
dtype: int32
- name: tokenizations
sequence:
sequence: int32
length: 2
- name: answer
dtype: string
- name: page
dtype: string
- name: raw_answer
dtype: string
- name: fold
dtype: string
- name: gameplay
dtype: bool
- name: category
dtype: string
- name: subcategory
dtype: string
- name: tournament
dtype: string
- name: difficulty
dtype: string
- name: year
dtype: int32
splits:
- name: guesstrain
num_bytes: 629450237
num_examples: 505321
- name: buzztrain
num_bytes: 98941633
num_examples: 82574
- name: guessdev
num_bytes: 9112676
num_examples: 6818
- name: buzzdev
num_bytes: 9924887
num_examples: 7451
- name: guesstest
num_bytes: 19470155
num_examples: 14069
- name: buzztest
num_bytes: 17011859
num_examples: 12610
- name: adversarial
num_bytes: 6491504
num_examples: 5812
download_size: 150604036
dataset_size: 790402951
configs:
- config_name: mode=first,char_skip=25
data_files:
- split: guesstrain
path: mode=first,char_skip=25/guesstrain-*
- split: buzztrain
path: mode=first,char_skip=25/buzztrain-*
- split: guessdev
path: mode=first,char_skip=25/guessdev-*
- split: buzzdev
path: mode=first,char_skip=25/buzzdev-*
- split: guesstest
path: mode=first,char_skip=25/guesstest-*
- split: buzztest
path: mode=first,char_skip=25/buzztest-*
- split: adversarial
path: mode=first,char_skip=25/adversarial-*
- config_name: mode=full,char_skip=25
data_files:
- split: guesstrain
path: mode=full,char_skip=25/guesstrain-*
- split: buzztrain
path: mode=full,char_skip=25/buzztrain-*
- split: guessdev
path: mode=full,char_skip=25/guessdev-*
- split: buzzdev
path: mode=full,char_skip=25/buzzdev-*
- split: guesstest
path: mode=full,char_skip=25/guesstest-*
- split: buzztest
path: mode=full,char_skip=25/buzztest-*
- split: adversarial
path: mode=full,char_skip=25/adversarial-*
- config_name: mode=runs,char_skip=25
data_files:
- split: guesstrain
path: mode=runs,char_skip=25/guesstrain-*
- split: buzztrain
path: mode=runs,char_skip=25/buzztrain-*
- split: guessdev
path: mode=runs,char_skip=25/guessdev-*
- split: buzzdev
path: mode=runs,char_skip=25/buzzdev-*
- split: guesstest
path: mode=runs,char_skip=25/guesstest-*
- split: buzztest
path: mode=runs,char_skip=25/buzztest-*
- split: adversarial
path: mode=runs,char_skip=25/adversarial-*
- config_name: mode=sentences,char_skip=25
data_files:
- split: guesstrain
path: mode=sentences,char_skip=25/guesstrain-*
- split: buzztrain
path: mode=sentences,char_skip=25/buzztrain-*
- split: guessdev
path: mode=sentences,char_skip=25/guessdev-*
- split: buzzdev
path: mode=sentences,char_skip=25/buzzdev-*
- split: guesstest
path: mode=sentences,char_skip=25/guesstest-*
- split: buzztest
path: mode=sentences,char_skip=25/buzztest-*
- split: adversarial
path: mode=sentences,char_skip=25/adversarial-*
---
# Dataset Card for "qanta"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [http://www.qanta.org/](http://www.qanta.org/)
- **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- **Paper:** [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792)
- **Point of Contact:** [Jordan Boyd-Graber](mailto:[email protected])
- **Size of downloaded dataset files:** 170.75 MB
- **Size of the generated dataset:** 147.18 MB
- **Total amount of disk used:** 317.93 MB
### Dataset Summary
The Qanta dataset is a question answering dataset based on the academic trivia game Quizbowl.
### Supported Tasks and Leaderboards
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Dataset Structure
### Data Instances
#### mode=first,char_skip=25
- **Size of downloaded dataset files:** 170.75 MB
- **Size of the generated dataset:** 147.18 MB
- **Total amount of disk used:** 317.93 MB
An example of 'guessdev' looks as follows.
```
This example was too long and was cropped:
{
"answer": "Apollo_program",
"category": "History",
"char_idx": -1,
"dataset": "quizdb.org",
"difficulty": "easy_college",
"first_sentence": "As part of this program, William Anders took a photo that Galen Rowell called \"the most influential environmental photograph ever taken.\"",
"fold": "guessdev",
"full_question": "\"As part of this program, William Anders took a photo that Galen Rowell called \\\"the most influential environmental photograph e...",
"gameplay": false,
"id": "127028-first",
"page": "Apollo_program",
"proto_id": "",
"qanta_id": 127028,
"qdb_id": 126689,
"raw_answer": "Apollo program [or Project Apollo; accept Apollo 8; accept Apollo 1; accept Apollo 11; prompt on landing on the moon]",
"sentence_idx": -1,
"subcategory": "American",
"text": "As part of this program, William Anders took a photo that Galen Rowell called \"the most influential environmental photograph ever taken.\"",
"tokenizations": [[0, 137], [138, 281], [282, 412], [413, 592], [593, 675]],
"tournament": "ACF Fall",
"year": 2016
}
```
### Data Fields
The data fields are the same among all splits.
#### mode=first,char_skip=25
- `id`: a `string` feature.
- `qanta_id`: an `int32` feature.
- `proto_id`: a `string` feature.
- `qdb_id`: an `int32` feature.
- `dataset`: a `string` feature.
- `text`: a `string` feature.
- `full_question`: a `string` feature.
- `first_sentence`: a `string` feature.
- `char_idx`: an `int32` feature.
- `sentence_idx`: an `int32` feature.
- `tokenizations`: a dictionary feature containing:
- `feature`: an `int32` feature.
- `answer`: a `string` feature.
- `page`: a `string` feature.
- `raw_answer`: a `string` feature.
- `fold`: a `string` feature.
- `gameplay`: a `bool` feature.
- `category`: a `string` feature.
- `subcategory`: a `string` feature.
- `tournament`: a `string` feature.
- `difficulty`: a `string` feature.
- `year`: an `int32` feature.
### Data Splits
| name |adversarial|buzzdev|buzztrain|guessdev|guesstrain|buzztest|guesstest|
|-----------------------|----------:|------:|--------:|-------:|---------:|-------:|--------:|
|mode=first,char_skip=25| 1145| 1161| 16706| 1055| 96221| 1953| 2151|
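As a minimal sketch (assuming a recent `datasets` release), any of the configurations above can be loaded by passing the `config_name` from the YAML header, and the split names match the table:
```python
from datasets import load_dataset

# Config names match the YAML `config_name` entries above.
qanta = load_dataset("community-datasets/qanta", "mode=first,char_skip=25")

example = qanta["guessdev"][0]
print(example["text"])  # the first sentence of the question in this mode
print(example["page"])  # the answer's Wikipedia page title
```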
## Dataset Creation
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the source language producers?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Annotations
#### Annotation process
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
#### Who are the annotators?
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Personal and Sensitive Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Discussion of Biases
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Other Known Limitations
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
## Additional Information
### Dataset Curators
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Licensing Information
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Citation Information
```
@article{Rodriguez2019QuizbowlTC,
title={Quizbowl: The Case for Incremental Question Answering},
author={Pedro Rodriguez and Shi Feng and Mohit Iyyer and He He and Jordan L. Boyd-Graber},
journal={ArXiv},
year={2019},
volume={abs/1904.04792}
}
```
### Contributions
Thanks to [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
|
Kasmic/negative_semtiment_small
|
Kasmic
|
2025-04-12T08:32:21Z
| 16 | 0 |
[
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-12T08:19:52Z
| 0 |
---
license: mit
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset contains prompts for a model and responses that have negative sentiment. It was curated using ClaudeAI for fine-tuning a model.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** Kasmik Regmi
- **License:** MIT
## Uses
The dataset is intended to be used for fine-tuning a model, but further applications are possible.
## Dataset Card Authors
[email protected]
|
ultimate-dictionary/languages_dataset
|
ultimate-dictionary
|
2025-05-31T21:55:53Z
| 35 | 1 |
[
"language:en",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"dictionary",
"languages",
"language",
"linguistics",
"linguistic"
] |
[] |
2025-05-31T19:54:00Z
| 0 |
---
license: cc-by-4.0
language:
- en
tags:
- dictionary
- languages
- language
- linguistics
- linguistic
pretty_name: Languages Dataset
size_categories:
- 1K<n<10K
---
This dataset contains 8612 languages from across the world, along with data such as Glottocodes, ISO 639-3 codes, names, and language families.
**Original source:** https://glottolog.org/glottolog/language
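A minimal sketch of loading the CSV with the `datasets` library; the column names are assumptions inferred from the description above, so check the actual header first:
```python
from datasets import load_dataset

# The repository ships CSV data, so the generic loader applies.
langs = load_dataset("ultimate-dictionary/languages_dataset", split="train")

# Columns are assumed to include fields like Glottocode, ISO 639-3 code,
# name, and language family; print the real header to confirm.
print(langs.column_names)
```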
|
falan42/Turkish-Tunc_M-tts-dataset
|
falan42
|
2025-04-15T17:00:49Z
| 19 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-15T16:57:33Z
| 0 |
---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 745263518.688
num_examples: 4552
download_size: 743074019
dataset_size: 745263518.688
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yinxinyuchen/so101-cyxy-record-v3
|
yinxinyuchen
|
2025-09-23T11:27:50Z
| 94 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] |
[
"robotics"
] |
2025-09-23T08:11:06Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 10,
"total_frames": 3031,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.env": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
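As a minimal sketch (assuming standard `datasets` behavior), the tabular part of the dataset can be loaded via the default config, which the YAML header maps to `data/*/*.parquet`; the camera streams live separately under `videos/` per the `video_path` template, and the LeRobot library itself is the intended loader for synchronized video access:
```python
from datasets import load_dataset

# Loads the per-frame tabular features (action, observation.state,
# timestamps, indices) declared in info.json above.
ds = load_dataset("yinxinyuchen/so101-cyxy-record-v3", split="train")

frame = ds[0]
print(frame["action"])         # 6-dim action vector, names listed above
print(frame["episode_index"])  # which of the 10 episodes the frame is from
```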
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
FrancophonIA/Localidades_2007
|
FrancophonIA
|
2025-03-30T14:21:48Z
| 21 | 0 |
[
"task_categories:translation",
"language:deu",
"language:spa",
"language:por",
"language:eng",
"language:fra",
"region:us"
] |
[
"translation"
] |
2024-11-29T20:54:38Z
| 0 |
---
language:
- deu
- spa
- por
- eng
- fra
multilinguality:
- multilingual
task_categories:
- translation
viewer: false
---
> [!NOTE]
> Dataset origin: https://live.european-language-grid.eu/catalogue/corpus/19413
## Citation
```
Localidades 2007 (2022). Version unspecified. [Dataset (Text corpus)]. Source: European Language Grid. https://live.european-language-grid.eu/catalogue/corpus/19413
```
|
garrykuwanto/crd3-raw-dialogues
|
garrykuwanto
|
2025-09-21T14:49:22Z
| 85 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-09-21T03:41:10Z
| 0 |
---
dataset_info:
features:
- name: episode_name
dtype: string
- name: dialogues
sequence:
sequence: string
splits:
- name: train
num_bytes: 67717469
num_examples: 140
download_size: 32567474
dataset_size: 67717469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
supergoose/flan_combined_task1555_scitail_answer_generation
|
supergoose
|
2025-03-10T14:29:23Z
| 15 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-10T14:29:22Z
| 0 |
---
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: _template_idx
dtype: int64
- name: _task_source
dtype: string
- name: _task_name
dtype: string
- name: _template_type
dtype: string
splits:
- name: train
num_bytes: 6262191
num_examples: 9287
download_size: 1531078
dataset_size: 6262191
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/oh_teknium_scaling_down_random_0.6
|
mlfoundations-dev
|
2024-12-21T13:19:01Z
| 57 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-21T13:18:45Z
| 0 |
---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: weight
dtype: float64
splits:
- name: train
num_bytes: 896510723
num_examples: 600930
download_size: 426530700
dataset_size: 896510723
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reasoning-proj/c_dfiltered_QwQ-32B_2_madversarial_continue_unrelated_t10
|
reasoning-proj
|
2025-09-24T00:21:40Z
| 78 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-09-23T16:08:39Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: question_license
dtype: string
- name: question_source
dtype: string
- name: model_name
dtype: string
- name: verifier_score
dtype: int64
- name: mutated_answer_content
dtype: string
- name: continuation_1
dtype: string
- name: complete_answer_1
dtype: string
- name: continuation_2
dtype: string
- name: complete_answer_2
dtype: string
- name: continuation_3
dtype: string
- name: complete_answer_3
dtype: string
- name: continuation_4
dtype: string
- name: complete_answer_4
dtype: string
- name: continuation_5
dtype: string
- name: complete_answer_5
dtype: string
- name: continuation_6
dtype: string
- name: complete_answer_6
dtype: string
- name: continuation_7
dtype: string
- name: complete_answer_7
dtype: string
- name: continuation_8
dtype: string
- name: complete_answer_8
dtype: string
- name: continuation_model
dtype: string
splits:
- name: train
num_bytes: 152302157
num_examples: 600
download_size: 55638110
dataset_size: 152302157
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chiyuanhsiao/audio_L2-regular-dare_spoken-web-questions
|
chiyuanhsiao
|
2025-05-19T03:28:41Z
| 98 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-02T22:09:46Z
| 0 |
---
dataset_info:
features:
- name: url
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: question_unit
sequence: int64
- name: response_interleaf
dtype: string
- name: response_text
dtype: string
- name: response_tokens
sequence: int64
- name: response_speech
dtype: audio
- name: response_asr
dtype: string
- name: mos_score
dtype: float64
splits:
- name: test
num_bytes: 1220400850.0
num_examples: 2032
download_size: 1143879430
dataset_size: 1220400850.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
ishika/realworld_franka
|
ishika
|
2025-04-26T20:01:56Z
| 42 | 0 |
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"realworld",
"panda"
] |
[
"robotics"
] |
2025-04-23T21:19:42Z
| 0 |
---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- realworld
- panda
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "panda",
"total_episodes": 22,
"total_frames": 1970,
"total_tasks": 5,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:22"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"image": {
"dtype": "image",
"shape": [
256,
256,
3
],
"names": [
"height",
"width",
"channel"
]
},
"state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"state"
]
},
"actions": {
"dtype": "float32",
"shape": [
7
],
"names": [
"actions"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
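A sketch of reading one episode directly from the Hub by instantiating the `data_path` template above (episode_chunk 0, episode_index 0); the `hf://` URL assumes `huggingface_hub`'s fsspec integration is installed:
```python
import pandas as pd

# data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet
episode = pd.read_parquet(
    "hf://datasets/ishika/realworld_franka/data/chunk-000/episode_000000.parquet"
)
print(episode[["frame_index", "timestamp"]].head())
```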
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
mthandazo/bnil_poj4_algo_classification
|
mthandazo
|
2025-02-11T14:35:30Z
| 23 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-01-30T11:06:00Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 689683649.1446937
num_examples: 101272
- name: test
num_bytes: 229899089.85530624
num_examples: 33758
download_size: 329922983
dataset_size: 919582739.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
mlfoundations-dev/alpaca_scale_x8_test
|
mlfoundations-dev
|
2024-12-07T03:44:01Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-07T03:43:59Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: filtered_reason
dtype: 'null'
- name: filtered_decision
dtype: bool
- name: max_similarity_score
dtype: float64
- name: embedding
sequence: float64
- name: too_similar
dtype: bool
- name: similar_text
dtype: string
- name: similar_text_similarity
dtype: float64
splits:
- name: train
num_bytes: 2714299
num_examples: 772
download_size: 2837078
dataset_size: 2714299
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alea-institute/kl3m-data-dotgov-www.whs.mil
|
alea-institute
|
2025-04-11T01:56:42Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2504.07854",
"arxiv:2503.17247",
"region:us"
] |
[] |
2025-02-02T13:01:14Z
| 0 |
---
dataset_info:
features:
- name: identifier
dtype: string
- name: dataset
dtype: string
- name: mime_type
dtype: string
- name: tokens
sequence: int64
splits:
- name: train
num_bytes: 22291474
num_examples: 207
download_size: 3319001
dataset_size: 22291474
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# KL3M Data Project
> **Note**: This page provides general information about the KL3M Data Project. Additional details specific to this dataset will be added in future updates. For complete information, please visit the [GitHub repository](https://github.com/alea-institute/kl3m-data) or refer to the [KL3M Data Project paper](https://arxiv.org/abs/2504.07854).
## Description
This dataset is part of the [ALEA Institute's](https://aleainstitute.ai/) KL3M Data Project, which provides copyright-clean training resources for large language models.
## Dataset Details
- **Format**: Parquet files containing document text and metadata
- **License**: [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)
- **Tokenizer**: The `tokens` field uses the [kl3m-004-128k-cased](https://huggingface.co/alea-institute/kl3m-004-128k-cased) tokenizer, a case-sensitive 128K vocabulary tokenizer optimized for legal, financial, and enterprise documents
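A minimal sketch of recovering readable text from the pre-tokenized `tokens` field with the tokenizer named above, assuming standard `datasets` and `transformers` APIs:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("alea-institute/kl3m-data-dotgov-www.whs.mil", split="train")
tokenizer = AutoTokenizer.from_pretrained("alea-institute/kl3m-004-128k-cased")

doc = ds[0]
print(doc["identifier"], doc["mime_type"])
print(tokenizer.decode(doc["tokens"][:100]))  # first ~100 tokens as text
```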
## Abstract
Practically all large language models have been pre-trained on data that is subject to global uncertainty related to copyright infringement and breach of contract. This creates potential risk for users and developers due to this uncertain legal status. The KL3M Data Project directly confronts this critical issue by introducing the largest comprehensive training data pipeline that minimizes risks related to copyright or breach of contract.
The foundation of this project is a corpus of over 132 million documents and trillions of tokens spanning 16 different sources that have been verified to meet the strict copyright and licensing protocol detailed in the project. We are releasing the entire pipeline, including:
1. The source code to acquire and process these documents
2. The original document formats with associated provenance and metadata
3. Extracted content in a standardized format
4. Pre-tokenized representations of the documents
5. Various mid- and post-train resources such as question-answer, summarization, conversion, drafting, classification, prediction, and conversational data
All of these resources are freely available to the public on S3, Hugging Face, and GitHub under CC-BY terms. We are committed to continuing this project in furtherance of a more ethical, legal, and sustainable approach to the development and use of AI models.
## Legal Basis
This dataset is fully compliant with copyright law and contractual terms. The content is included based on the following legal foundation:
- Public domain materials
- US government works
- Open access content under permissive licenses
- Content explicitly licensed for AI training
## Papers
For more information about the KL3M Data Project, please refer to:
- [The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models](https://arxiv.org/abs/2504.07854)
- [KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications](https://arxiv.org/abs/2503.17247)
## Citation
If you use this dataset in your research, please cite:
```bibtex
@misc{bommarito2025kl3mdata,
title={The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models},
author={Bommarito II, Michael J. and Bommarito, Jillian and Katz, Daniel Martin},
year={2025},
eprint={2504.07854},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
```bibtex
@misc{bommarito2025kl3m,
title={KL3M Tokenizers: A Family of Domain-Specific and Character-Level Tokenizers for Legal, Financial, and Preprocessing Applications},
author={Bommarito II, Michael J. and Katz, Daniel Martin and Bommarito, Jillian},
year={2025},
eprint={2503.17247},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## About ALEA
The ALEA Institute is a non-profit research organization focused on advancing AI for business, law, and governance. Learn more at [https://aleainstitute.ai/](https://aleainstitute.ai/).
|
Sherirto/BD4UI
|
Sherirto
|
2025-04-25T07:35:43Z
| 181 | 1 |
[
"license:cc-by-nd-4.0",
"arxiv:2402.07197",
"arxiv:2306.01984",
"arxiv:2312.00752",
"arxiv:2309.13378",
"arxiv:2404.08522",
"arxiv:2309.13944",
"arxiv:2305.02301",
"arxiv:2305.18290",
"arxiv:2404.19756",
"arxiv:2307.08934",
"arxiv:2303.16458",
"arxiv:2402.11641",
"arxiv:2403.01742",
"arxiv:2309.12971",
"arxiv:2405.16800",
"arxiv:2405.03110",
"arxiv:2406.04035",
"arxiv:2312.06371",
"arxiv:2405.12096",
"arxiv:2406.11945",
"arxiv:2312.12462",
"arxiv:2404.03873",
"arxiv:2410.10329",
"arxiv:2405.16789",
"arxiv:2310.05370",
"arxiv:2409.17386",
"arxiv:2402.14744",
"arxiv:2402.02370",
"arxiv:2305.17333",
"arxiv:2407.09096",
"arxiv:2402.02368",
"arxiv:2412.09243",
"arxiv:2501.02737",
"arxiv:2404.01340",
"arxiv:2308.07707",
"arxiv:2407.05441",
"arxiv:2407.11588",
"region:us"
] |
[] |
2024-10-31T01:39:09Z
| 0 |
---
license: cc-by-nd-4.0
---
# BD4UI Lab Group-Meeting Paper-Sharing Repository
> **PPT files are available under Files and versions. PPT files are named by sharing date and method-name abbreviation.**
|Date|Presenter|Venue|Paper Title|Link|Abstract|
|-|-|-|-|-|-|
| 2024/4/9 | 于志刚 | WWW2024 | GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks | https://arxiv.org/pdf/2402.07197 | Graph models (GMs) are typically restricted to tasks in predefined formats (e.g., node classification) and cannot handle open-ended tasks, whereas LLMs can. Although several methods apply LLMs to graphs, they cannot handle predefined and open-ended tasks at the same time. This paper aligns a graph model with a large language model so that the model can handle both kinds of tasks. |
| 2024/4/16 | 杨珂懿 | NeurIPS2023 | DYffusion: A Dynamics-informed Diffusion Model for Spatiotemporal Forecasting | https://arxiv.org/pdf/2306.01984 | Proposes a method for efficiently training diffusion models for probabilistic spatiotemporal forecasting, where producing stable and accurate forecasts remains challenging. DYffusion exploits the temporal dynamics in the data and couples them directly with the model's diffusion steps: a stochastic, time-conditioned interpolator and a forecaster network mimic the forward and reverse processes of a standard diffusion model. DYffusion naturally supports multi-step and long-horizon forecasting, allows highly flexible continuous-time sampling trajectories, and can trade performance for accelerated sampling at inference time, with significantly better computational efficiency than conventional Gaussian-noise diffusion models. |
| 2024/4/23 | 曹敏君 | Arxiv.2312 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | https://arxiv.org/pdf/2312.00752 | Introduces Mamba, a new sequence-model architecture that improves on classical state space models through selective state space models (SSMs). By making SSM parameters input-dependent, the model can selectively propagate or forget information, addressing earlier models' weaknesses on discrete, information-dense data such as text. Although this change rules out efficient convolutional computation, the authors design a hardware-aware parallel algorithm that runs in recurrent mode, making Mamba up to 5x faster than Transformers at inference while scaling linearly in sequence length. |
| 2024/4/30 | 程铭 | NeurIPS2023 | Deciphering Spatio-Temporal Graph Forecasting: A Causal Lens and Treatment | https://arxiv.org/pdf/2309.13378 | Spatio-temporal graph networks suffer from temporal out-of-distribution issues and dynamic spatial causality in forecasting. This paper proposes a new framework, CaST, which separates the temporal environment from the input data via back-door adjustment with a novel disentanglement block, and uses front-door adjustment together with edge-level convolution to model the ripple effect of causal relations. |
| 2024/4/30 | 王颖 | Arxiv.2404 | FUXI-DA: A GENERALIZED DEEP LEARNING DATA ASSIMILATION FRAMEWORK FOR ASSIMILATING SATELLITE OBSERVATIONS | https://arxiv.org/pdf/2404.08522 | Deep learning models have shown promise in matching or even surpassing the forecast accuracy of the world's leading NWP models, motivating DL-based data assimilation frameworks tailored to weather forecasting models. This paper introduces FuXi-DA, a generalized DL-based data assimilation framework for assimilating satellite observations. By absorbing data from the Advanced Geosynchronous Radiation Imager (AGRI) aboard Fengyun-4B, FuXi-DA consistently reduces analysis error and significantly improves forecast performance. |
| 2024/5/7 | 颜浩 | NeurIPS2023 | Provable Training for Graph Contrastive Learning | https://arxiv.org/pdf/2309.13944 | GCL pipelines typically augment the graph, pass the augmented views through a GNN to obtain node representations, and optimize an InfoNCE objective. Given the complexity of graph structure, do all nodes follow the InfoNCE principle equally well during GCL? Analyzing mainstream GCL methods, this paper identifies an imbalance phenomenon in GCL training and proposes "node compactness" to measure how well each node follows the principle; the resulting method, POT, can be plugged into other GCL methods. |
| 2024/5/14 | 王梓辰 | NeurIPS2023 | BCDiff: Bidirectional Consistent Diffusion for Instantaneous Trajectory Prediction | https://openreview.net/pdf?id=FOFJmR1oxt | Proposes BCDiff, a bidirectional consistent diffusion model for instantaneous trajectory prediction. A mutual guidance mechanism couples two diffusion models that consistently generate the unobserved historical trajectory and the future trajectory step by step, so their complementary information guides prediction in both directions. Since trajectories are highly noisy in the early denoising steps, a gating mechanism learns weights between the predicted trajectory and the limited observed trajectory to balance their contributions. BCDiff is an encoder-free framework compatible with existing trajectory models. |
| 2024/5/21 | 徐榕桧 | ACL2023 | Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes | https://arxiv.org/pdf/2305.02301 | Deploying large language models (LLMs) is challenging because they are memory-inefficient and compute-intensive in practice. Researchers therefore train smaller task-specific models, either by fine-tuning on labeled data or by distilling with LLM-generated labels; both require large amounts of training data to match LLM performance. This paper proposes Distilling Step-by-Step, which extracts LLM rationales as additional supervision for training small models within a multi-task framework, achieving (a) models smaller than LLMs and (b) less training data than fine-tuning or distillation. |
| 2024/5/28 | 刘若尘 | NeurIPS2023 | Direct Preference Optimization: Your Language Model is Secretly a Reward Model | https://arxiv.org/pdf/2305.18290 | Aligning large language models with human preferences has long been a focus of NLP research, with reinforcement learning from human feedback (RLHF) as the representative approach. However, RLHF training is complex and unstable: it first trains a reward function and then fine-tunes the LLM via reinforcement learning. This paper exploits a mapping between reward functions and optimal generation policies (the LLM we want), bypassing the reinforcement learning stage and enabling end-to-end training. |
| 2024/5/28 | 张舜洋 | Arxiv.2404 | KAN: Kolmogorov–Arnold Networks | https://arxiv.org/pdf/2404.19756 | This paper has attracted wide attention since its announcement, claiming to match MLP results on some tasks with fewer parameters. This talk introduces KAN's design and briefly analyzes where its computational cost comes from. |
| 2024/6/4 | 金志晗 | J.Comput.Phys2024 | Multi-stage neural networks: Function approximator of machine precision | https://arxiv.org/pdf/2307.08934 | In practice it is hard to reduce prediction error below O(1e-05) even with large networks and long training. To address this, the authors develop multi-stage neural networks that split training into stages, each stage fitting a new network to the previous stage's residual, which sharply reduces the residual magnitude and mitigates the spectral bias of ordinary networks, enabling them to capture high-frequency features of the target function. The paper further shows that for regression problems and physics-informed neural networks, multi-stage training can reach nearly double-precision machine accuracy O(1e-16) within a finite number of iterations, which a single network can hardly attain. |
| 2024/6/4 | 殷珺 | KDD2023 | When to Pre-Train Graph Neural Networks? From Data Generation Perspective! | https://arxiv.org/pdf/2303.16458 | Studies the feasibility (expected benefit) of pre-training graph neural networks. Based on the assumption that pre-training has high feasibility when the downstream task data can be generated from the pre-training data with high probability, the paper introduces graphons as the graph generation module and proposes the W2PGNN framework, which estimates the expected benefit of GNN pre-training for given downstream and pre-training data. Experiments on multiple real-world node- and graph-classification datasets verify a high correlation between W2PGNN's feasibility estimates and the best downstream performance. |
| 2024/6/11 | 于志刚 | Arxiv.2402 | Towards Versatile Graph Learning Approach: from the Perspective of Large Language Models | https://arxiv.org/pdf/2402.11641 | Graph-structured data underlies many of the most common real-world application scenarios. The large variety of learning tasks across applications and the complexity of graph learning pipelines make it challenging for human experts to design general-purpose graph learning methods. Large language models (LLMs), with their broad knowledge and human-like intelligence, offer a potential solution; this paper proposes a novel conceptual prototype for designing versatile graph learning methods with LLMs. |
| 2024/6/19 | 杨珂懿 | ICLR2024 | DIFFUSION-TS: INTERPRETABLE DIFFUSION FOR GENERAL TIME SERIES GENERATION | https://arxiv.org/pdf/2403.01742 | Proposes Diffusion-TS, a novel diffusion-based framework that generates high-quality multivariate time-series samples using a transformer with disentangled temporal representations: a deep decomposition structure guides Diffusion-TS to capture the semantics of the series, while the transformer mines detailed sequential information from the noisy model input. Unlike existing diffusion-based methods, the model is trained to reconstruct the sample directly rather than the noise at each diffusion step, combined with a Fourier-based loss term. Diffusion-TS generates time series that are both interpretable and realistic, and extends to conditional generation tasks such as forecasting and imputation without any model changes. |
| 2024/6/19 | 曹敏君 | AAAI2024 | Higher-Order Graph Convolutional Network with Flower-Petals Laplacians on Simplicial Complexes | https://arxiv.org/pdf/2309.12971 | Despite GNNs' success on various tasks, their pairwise-network foundation inherently limits their ability to identify latent higher-order interactions in complex systems. To close this gap, the paper leverages the rich mathematical theory of simplicial complexes (SCs); current SC-based GNNs are complex and rigid, and quantifying higher-order interaction strength remains challenging. The paper proposes a higher-order Flower-Petals (FP) model that incorporates FP Laplacians into SCs, and a higher-order graph convolutional network (HiGCN) based on FP Laplacians that identifies intrinsic features across topological scales. Learnable graph filters (one parameter group per FP Laplacian domain) identify distinct patterns, with filter weights serving as a quantifiable measure of higher-order interaction strength. Empirical studies show state-of-the-art performance on a range of graph tasks and a scalable, flexible solution for exploring higher-order interactions in graphs. |
| 2024/7/3 | 颜浩 | Arxiv.2405 | TAGA: Text-Attributed Graph Self-Supervised Learning by Synergizing Graph and Text Mutual Transformations | https://arxiv.org/pdf/2405.16800 | Representation learning on text-attributed graphs has recently received wide attention, but existing methods mostly follow the supervised paradigm and rely heavily on labeled data. This paper introduces TAGA, a self-supervised framework for text graphs that constructs two mutually convertible views, Text-of-Graph and Graph-of-Text, and aligns their representations to capture structural and semantic knowledge on the graph simultaneously. |
| 2024/7/31 | 殷珺 | Arxiv.2405 | Vector Quantization for Recommender Systems: A Review and Outlook | https://arxiv.org/pdf/2405.03110 | A survey of vector quantization (VQ) for recommender systems, covering classical VQ techniques, VQ in recommendation, and stochastic VQ. VQ compresses large collections of vector representations into a small learnable codebook and has been used to accelerate nearest-neighbor search and attention mechanisms. Recently its discrete indexing ability has drawn attention and, tightly coupled with generative recommender systems, it is widely used as the recommender's indexer module. |
| 2024/7/31 | 程铭 | KDD2024 | Spatio-temporal Early Prediction based on Multi-objective Reinforcement Learning | https://arxiv.org/pdf/2406.04035 | Accuracy and timeliness are often hard to optimize simultaneously in prediction tasks: results obtained too early may raise the false-alarm rate, while delayed predictions that gather more information may lose timeliness. In real-world scenarios such as wildfires, crime, and traffic congestion, timely prediction is essential to protecting lives and property, so balancing accuracy and timeliness is an active research topic. This paper proposes a spatio-temporal early prediction model based on multi-objective reinforcement learning that can execute the optimal policy for a given preference or infer preferences from a few samples, addressing two main challenges: (1) improving early-prediction accuracy and (2) providing an optimization policy that determines the optimal prediction time for each region. |
| 2024/8/28 | 刘若尘 | SIGIR2024 | Enhancing Sequential Recommenders with Augmented Knowledge from Aligned Large Language Models | https://dl.acm.org/doi/pdf/10.1145/3626772.3657782 | Traditional sequential recommendation considers only user-item interactions without leveraging real-world knowledge about items; applying large language models to sequential recommendation can fill this gap. However, LLMs lack the ability to model sequential behavior and have long inference times, so using them for sequential recommendation remains challenging. This paper combines traditional ID-based sequential recommendation with LLMs, using the LLM to generate interaction-aligned text for items for semantic enhancement. |
| 2024/9/4 | 王梓辰 | AAAI2024 | BAT: Behavior-Aware Human-Like Trajectory Prediction for Autonomous Driving | https://arxiv.org/pdf/2312.06371 | Accurately predicting the future trajectories of surrounding vehicles is a major challenge on the way to fully autonomous driving. This paper proposes a behavior-aware trajectory prediction model (BAT) composed of behavior-aware, interaction-aware, priority-aware, and position-aware modules that perceive latent interactions and achieve higher-level learning without rigid classification of driving behaviors. The key contribution is a dynamic geometric graph method with behavior-aware criteria that requires no manual behavior labels during training, sidestepping the challenges of discrete behavior classification and choosing an appropriate time window. Experiments show strong results on the NGSIM, highD, RounD, and MoCAD datasets, outperforming most baselines even with a reduced (25%) portion of the training data. |
| 2024/9/11 | 徐榕桧 | KDD2024 | PATE: Proximity-Aware Time series anomaly Evaluation | https://arxiv.org/pdf/2405.12096 | Evaluating anomaly detection algorithms on time-series data is crucial, since inaccuracies can lead to flawed decisions in domains where real-time analysis and data-driven strategies matter. Traditional performance metrics fail to capture complex temporal dynamics and characteristics specific to time-series anomalies, such as early and delayed detection. This paper proposes Proximity-Aware Time series anomaly Evaluation (PATE), a new metric that incorporates the temporal relationship between predictions and anomaly intervals. PATE uses proximity-based weighting that considers buffer zones around anomaly intervals, enabling a more detailed and informed evaluation of detections. Experiments show that many SOTA models may be over-estimated in certain scenarios and that PATE enables fairer model comparison, guiding future research toward more effective and accurate detection models. |
| 2024/9/18 | 于志刚 | KDD2024 | GAugLLM: Improving Graph Contrastive Learning for Text-Attributed Graphs with Large Language Models | https://arxiv.org/pdf/2406.11945 | GAugLLM is designed for text-attributed graphs and leverages rich text attributes together with LLMs for joint feature-level and edge-level perturbation. It consists of two modules: a mixture-of-prompt-experts method generates augmented features by directly perturbing the input text attributes, and a collaborative edge modifier uses the text attributes for structural perturbation. |
| 2024/9/18 | 吴微 | Arxiv.2312 | Towards an end-to-end artificial intelligence driven global weather forecasting system | https://arxiv.org/pdf/2312.12462v1 | Proposes Adas, an AI-based data assimilation model, and combines it with the advanced AI forecasting model FengWu to build FengWu-Adas, the first end-to-end AI-based global weather forecasting system. Adas introduces confidence matrices, uses gated convolutions to handle sparse observations, and uses gated cross-attention to capture the interaction between background and observations. The study shows Adas can assimilate global observations to produce high-quality analyses, enabling the system to operate stably over long periods. |
| 2024/9/25 | 杨珂懿 | ICDE2024 | PrivShape: Extracting Shapes in Time Series under User-Level Local Differential Privacy | https://arxiv.org/pdf/2404.03873 | Proposes PrivShape, a trie-based mechanism under user-level local differential privacy that protects all elements of a time series. PrivShape first transforms the series to reduce its length, then applies trie expansion and two-level refinement to improve utility. Extensive experiments on real-world datasets show that PrivShape outperforms PatternLDP when adapted for offline use and effectively extracts frequent shapes. |
| 2024/10/10 | 曹敏君 | ICLR2024 | Biased Temporal Convolution Graph Network for Time Series Forecasting with Missing Values | https://openreview.net/forum?id=O9nZCwdGcG | Proposes BiTGraph, a biased temporal convolution graph network that jointly captures temporal dependencies and spatial structure. Multivariate time-series forecasting plays a key role in applications such as meteorological research, traffic management, and economic planning, and Transformer-based methods have recently improved long-term forecasting accuracy considerably. Existing methods usually assume complete inputs, but in practice series are often only partially observed due to device failures or costly data acquisition, which can severely degrade performance; simply applying imputation inevitably accumulates error and yields suboptimal solutions. BiTGraph injects the missingness patterns into two carefully designed modules (a multi-scale instance PartialTCN and a biased GCN), improving over existing methods by up to 9.93% on five real-world benchmark datasets. |
| 2024/10/17 | 程铭 | KDD2023 | Hierarchical Reinforcement Learning for Dynamic Autonomous Vehicle Navigation at Intelligent Intersections | https://dl.acm.org/doi/pdf/10.1145/3580305.3599839 | Proposes NavTL, a learning-based framework for jointly controlling traffic-signal plans and autonomous-vehicle rerouting in mixed traffic where human-driven and autonomous vehicles coexist. The goal is to improve travel efficiency and reduce total travel time by minimizing congestion at intersections while guiding autonomous vehicles away from temporarily congested roads. |
| 2024/10/24 | 颜浩 | Arxiv.2410 | GraphCLIP: Enhancing Transferability in Graph Foundation Models for Text-Attributed Graphs | https://arxiv.org/pdf/2410.10329 | GraphCLIP learns a graph foundation model via contrastive pre-training between graphs and text summaries. It uses LLMs to generate large-scale graph-summary pairs and designs a graph prompting method to mitigate catastrophic forgetting in few-shot learning scenarios. |
| 2024/10/31 | 殷珺 | Arxiv.2405 | NoteLLM-2: Multimodal Large Representation Models for Recommendation | https://arxiv.org/pdf/2405.16789 | Explores LLM-based multimodal recommender systems, trying (1) multi-modal in-context learning and (2) late fusion to mitigate existing models' over-reliance on text and neglect of visual information. |
| 2024/10/31 | 金志晗 | Nature 2023 | Skilful nowcasting of extreme precipitation with NowcastNet | https://www.nature.com/articles/s41586-023-06184-4 | Published in Nature in July 2023 by Tsinghua University and the China Meteorological Administration. Addressing the intensity and location errors common in current extreme-precipitation nowcasting methods, NowcastNet unifies physical evolution schemes and generative learning in a neural network with end-to-end forecast optimization, outperforming Google's DGMR precipitation nowcasting model. |
| 2024/11/07 | 刘若尘 | KDD 2023 | Generative Flow Network for Listwise Recommendation | https://dl.acm.org/doi/abs/10.1145/3580305.3599364 | List-wise recommendation recommends to the user the entire list of presented items as a whole. Research shows list-based approaches can improve recommendation by modeling item correlations within the list, but exploring the combinatorial list space is challenging, and cross-entropy-based methods may lack diversity. This paper proposes GFN4Rec, a generative approach borrowing ideas from flow networks to align the list generation probability with its reward. Its core strength is a log-scale reward-matching loss that improves generation diversity, together with an autoregressive item-selection model that captures item interactions and the list's future reward. Interested readers may browse the paper in advance. |
| 2024/11/28 | 徐榕桧 | KDD 2023 | DM-PFL: Hitchhiking Generic Federated Learning for Efficient Shift-Robust Personalization | https://dl.acm.org/doi/pdf/10.1145/3580305.3599311 | Existing personalized federated learning methods are vulnerable to distribution shifts between clients' local training and test data and involve heavy training workloads on local devices. To overcome these drawbacks, the authors explore efficient shift-robust personalization for federated learning. The idea is to hitchhike on the global model and improve the shift-robustness of personalized models with minimal extra training overhead. Concretely, they propose DM-PFL, a novel framework that trains global and personalized models with a dual-masking mechanism, enabling weight-level parameter sharing and end-to-end sparse training. |
| 2024/12/05 | 王梓辰 | CVPR 2024 | SocialCircle: Learning the Angle-based Social Interaction Representation for Pedestrian Trajectory Prediction | https://arxiv.org/pdf/2310.05370 | The diversity and uncertainty of social interactions among pedestrians make trajectory prediction highly challenging. Current approaches to modeling interactions fall into model-based and model-free categories: model-based methods predict mainly from specific "rules", while model-free methods are primarily data-driven. However, it is hard to design a universal "rule" that suits most social interactions, and model-free methods depend heavily on network architectures and lack interpretability. Inspired by how marine animals locate companions underwater via echoes, this paper proposes SocialCircle, an angle-based trainable social-interaction representation that continuously reflects the social context in different angular directions around the target pedestrian. Experiments show that SocialCircle not only improves prediction performance, but qualitative analyses also show it helps better simulate social interactions when predicting pedestrian trajectories, consistent with human intuition. |
| 2024/12/12 | 于志刚 | NeurIPS 2024 | Beyond Redundancy: Information-aware Unsupervised Multiplex Graph Structure Learning | https://arxiv.org/abs/2409.17386 | Real-world graph data often contains large amounts of task-irrelevant noise, which seriously hurts unsupervised multiplex graph learning (UMGL). Moreover, existing methods mainly rely on contrastive learning to maximize mutual information between views, restricting them to multi-view-redundant scenarios and limiting them in non-redundant ones. This paper focuses on learning a fused graph from the original multiplex graph in an unsupervised way, removing task-irrelevant noise while preserving sufficient task-relevant information, and proposes InfoMGF, an information-aware unsupervised multiplex graph fusion framework. |
| 2024/12/19 | 杨珂懿 | NeurIPS 2024 | Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation | https://arxiv.org/abs/2402.14744 | Introduces a novel approach that integrates large language models (LLMs) into an agent framework for flexible and efficient personal mobility trajectory generation. LLMs overcome the limitations of previous models by effectively processing semantic data and providing versatility in modeling diverse tasks. The approach addresses the pressing need to align LLMs with real-world urban mobility data, focusing on three research questions: combining LLMs with rich mobility data, devising reliable mobility generation strategies, and exploring LLM applications in urban mobility. The key technical contribution is a novel LLM agent framework that accounts for individual mobility patterns and motivations, including a self-consistency approach that aligns LLMs with real-world mobility data and a retrieval-augmented strategy for generating interpretable mobility. Experiments provide comprehensive validation on real-world data. This work pioneers LLM agent frameworks for mobility generation based on real-world human mobility data, offering a promising tool for urban mobility analysis. |
| 2024/12/26 | 曹敏君 | NeurIPS 2024 | AutoTimes: Autoregressive Time Series Forecasters via Large Language Models | https://arxiv.org/abs/2402.02370 | The authors are from Tsinghua University. This paper explores LLM-based autoregressive time-series forecasting: (1) it proposes AutoTimes, which repurposes LLMs as autoregressive time-series forecasters that handle flexible sequence lengths and achieve performance competitive with popular models; (2) it proposes token-wise prompting, which leverages the corresponding timestamps to make the method applicable to multimodal scenarios; (3) AutoTimes inherits the zero-shot and in-context learning capabilities of LLMs. |
| 2025/01/09 | 颜浩 | NeurIPS 2023 | MeZO: Fine-Tuning Language Models with Just Forward Passes | https://arxiv.org/abs/2305.17333 | As LMs grow, fine-tuning via the usual backpropagation algorithm requires substantial memory. Building on zeroth-order optimization, this paper develops MeZO, a memory-efficient zeroth-order optimizer that fine-tunes with the same memory footprint as inference. Comparisons with in-context learning, linear probing, and full-parameter fine-tuning verify the algorithm's efficiency and effectiveness. |
| 2025/01/09 | 胡杨 | AAAI 2025 | STD-PLM: Understanding Both Spatial and Temporal Properties of Spatial-Temporal Data with PLM | https://arxiv.org/abs/2407.09096 | Proposes STD-PLM, a framework for understanding spatial-temporal data based on pre-trained language models (PLMs) that performs both spatial-temporal forecasting and data imputation. STD-PLM understands spatial and temporal correlations through specially designed spatial and temporal tokenizers. It also designs topology-aware node embeddings so that the PLM can understand and exploit the topological structure of the data. To mitigate the efficiency issues introduced by PLMs, the paper designs a sandglass attention module, which significantly improves efficiency while preserving performance. |
| 2025/01/16 | 程铭 | TKDE 2024 | FELight: Fairness-Aware Traffic Signal Control via Sample-Efficient Reinforcement Learning | https://ieeexplore.ieee.org/document/10471901/ | Proposes FELight, a fairness-aware and sample-efficient traffic-signal control method. Concretely, it designs a novel fairness metric integrated into the decision process, penalizing high-delay cases via a threshold that activates the fairness mechanism; theoretical comparison with other fairness studies shows why and when this fairness brings advantages. In addition, counterfactual data augmentation enriches the interaction data to improve FELight's sample efficiency, and self-supervised state representation extracts informative features from raw states, improving sample efficiency further. Experiments on real traffic datasets show FELight provides comparatively fairer traffic-signal control than state-of-the-art methods without compromising performance. |
| 2025/02/21 | 曹敏君 | ICML 2024 | Timer: Generative Pre-trained Transformers Are Large Time Series Models | https://arxiv.org/abs/2402.02368 | Explores time-series analysis based on large pre-trained models, proposing Timer to improve performance on time-series tasks, especially in data-scarce settings. (1) The authors build a large-scale time-series dataset with up to 1 billion time points and propose the single-series sequence (S3) format, unifying heterogeneous series into a single-series format for pre-training. (2) Timer adopts a GPT-like architecture pre-trained by predicting the next time point (token), unifying forecasting, imputation, and anomaly detection as one generation task. (3) Timer inherits the generalization and scalability of large pre-trained models, matching or even surpassing popular task-specific models in few-shot settings. |
| 2025/02/28 | 刘若尘 | WWW 2025 | SPRec: Self-Play to Debias LLM-based Recommendation | https://arxiv.org/abs/2412.09243 | Existing LLM-based recommendation methods are usually trained with supervised fine-tuning; on top of SFT, DPO has attracted attention because it directly aligns the LLM with user interests. This paper finds that DPO fine-tuning naturally amplifies recommendation bias, making the model favor high-popularity items. To address this, it designs a self-play framework: in each self-play iteration, the model first runs an SFT step and then a DPO step, treating offline interaction data as positive samples and the previous iteration's predicted outputs as negative samples, and reweighting the DPO loss with the model's logits to adaptively suppress the bias term. |
| 2025/03/07 | 杨珂懿 | AAAI 2025 | Holistic Semantic Representation for Navigational Trajectory Generation | https://arxiv.org/abs/2501.02737 | Proposes HOSER (HOlistic SEmantic Representation), a framework for navigational trajectory generation. Given an origin-destination (OD) pair and the start time of a potential trajectory, the framework first proposes a road-network encoder to enlarge the receptive field of road-level and zone-level semantics; it then designs a multi-granularity trajectory encoder that integrates the spatio-temporal semantics of the generated trajectory, considering both point-level and trajectory-level features; finally, a destination-oriented navigator seamlessly integrates destination guidance. Extensive experiments on three real-world datasets show HOSER significantly outperforms state-of-the-art baselines, and its performance in few-shot and zero-shot settings further validates the effectiveness of holistic semantic representation. This work contributes an important multi-level semantic understanding framework for trajectory generation, helping produce high-quality trajectories that better match real navigation behavior. |
| 2025/03/14 | 徐榕桧 | NeurIPS 2024 | From Similarity to Superiority: Channel Clustering for Time Series Forecasting | https://arxiv.org/abs/2404.01340 | Time-series modeling strategies can be divided into channel-independent (CI) and channel-dependent (CD). CI ignores possibly necessary interactions between channels and generalizes poorly to unseen instances; CD can capture complex channel relations but tends to over-smooth, limiting forecasting accuracy. This paper proposes an adaptive Channel Clustering Module (CCM) that dynamically clusters channels with intrinsic similarity and combines the strengths of both strategies. CCM consists mainly of a cluster assigner and cluster-wise feed-forward networks: the assigner learns channel clusters from intrinsic similarity and produces prototype embeddings for each cluster via a cross-attention mechanism; CCM assigns a separate feed-forward network to each cluster and aggregates predictions with similarity weighting. By learning cluster prototypes, CCM also enables zero-shot forecasting, handling unseen samples without retraining. |
| 2025/03/28 | 罗子罗文 | AAAI 2024 | Fast Machine Unlearning without Retraining through Selective Synaptic Dampening | https://arxiv.org/abs/2308.07707 | As machine learning models and their (pre-)training datasets grow to enormous scale, "machine unlearning", which edits away private data, outdated knowledge, copyrighted content, harmful/unsafe information, dangerous capabilities, and misinformation without retraining the model from scratch, has received increasing attention. However, some existing MU methods require retraining part of the model's parameters at high cost and assume the past training weights and data are available, which is often hard to satisfy in practice. This paper therefore proposes a two-stage method based on Selective Synaptic Dampening (SSD) that requires no retraining: it is fast and simple, modifying only a small number of parameters to achieve efficient forgetting. The method first computes the relevance of parameters to the forget set using the Fisher information matrix, then, under a sparsity constraint, dampens the parameters most relevant to that set, quickly handling unlearning requests. Extensive experiments show that retraining-free SSD matches retrained models on benchmarks covering several kinds of unlearning requests with ResNet18 and ViT. |
| 2025/04/11 | 于志刚 | KDD 2024 | Representation Learning of Temporal Graphs with Structural Roles | https://dl.acm.org/doi/10.1145/3637528.3671854 | Structural roles refer to nodes' connection patterns and structural characteristics in a graph; nodes with similar connection patterns are considered to share a structural role. Most existing temporal-graph methods generate node representations from local connectivity proximity, ignoring global structural similarity and missing the benefit of structural roles. Simply put, a node's representation learning depends not only on its neighbors: non-adjacent nodes with structural features similar to the target node can also help. This paper therefore introduces global structural-role information into temporal-graph representation learning, fully exploiting it to overcome existing methods' shortcomings in capturing global structural similarity. |
| 2025/04/18 | 赵远博 | ICLR 2025 | Language Representations Can Be What Recommenders Need: Findings and Potentials | https://arxiv.org/abs/2407.05441 | Extensive research has shown that language models encode rich knowledge from small amounts of semantic information, but whether they encode user-preference information for recommendation remained unknown. Contrary to the common belief that language models and traditional recommendation models learn two different kinds of representations, this work re-examines that understanding and explores extracting a recommendation space directly from the language representation space, ultimately leading to a homomorphism between the language representation space and the behavior space. Building on this, the paper proposes a recommendation algorithm that relies only on language representations without ID information; a simple yet effective model can be built from key components such as MLPs, graph convolution, and an InfoNCE loss. Extensive experiments show the model outperforms leading collaborative-filtering models on multiple datasets, provides a good initialization for item representations, and exhibits strong zero-shot performance and user-awareness, empirically demonstrating the connection between language modeling and behavior modeling. |
| 2025/04/25 | 王梓辰 | ECCV 2024 | Progressive Pretext Task Learning for Human Trajectory Prediction | https://arxiv.org/abs/2407.11588 | Human trajectory prediction aims to forecast pedestrians' future trajectories, typically covering all horizons from short-term to long-term. Existing work, however, usually handles the whole prediction task with a uniform training paradigm, ignoring the distinction between short-term dynamics and long-term dependencies in human trajectories. To address this, the paper proposes a novel progressive pretext task learning (PPT) framework that gradually strengthens the model's ability to capture short-term dynamics and long-term dependencies for the final trajectory prediction. PPT comprises three training stages: in stage one, the model learns short-term dynamics via a stepwise next-position prediction task; in stage two, it further strengthens its understanding of long-term dependencies via a destination prediction task; in stage three, building on the knowledge from the first two stages, it solves the full trajectory prediction task. |
| 2025/05/09 | 张安奇 | |
|
dskong07/charging-cord-classification-dataset
|
dskong07
|
2025-03-06T02:03:59Z
| 16 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-03-06T02:00:58Z
| 0 |
---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
splits:
- name: train
num_bytes: 43122970.0
num_examples: 46
download_size: 43113728
dataset_size: 43122970.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
MisterMango23/GTA_4_Dataset
|
MisterMango23
|
2025-05-25T05:11:41Z
| 0 | 0 |
[
"license:artistic-2.0",
"size_categories:n<1K",
"format:text",
"modality:image",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-05-25T05:11:11Z
| 0 |
---
license: artistic-2.0
---
|
aadityap/ttt-clipped-training-step-2-buffer
|
aadityap
|
2025-02-24T07:53:40Z
| 15 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-02-24T07:53:38Z
| 0 |
---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: difficulty
dtype: int64
- name: problem_uid
dtype: string
- name: step
dtype: int64
splits:
- name: train
num_bytes: 8728668.173258003
num_examples: 400
- name: test
num_bytes: 22297
num_examples: 1
download_size: 2355448
dataset_size: 8750965.173258003
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Fahim18/dataset-video-10
|
Fahim18
|
2024-11-15T16:49:45Z
| 16 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-15T16:45:03Z
| 0 |
---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 1936814309.796483
num_examples: 1553
- name: test
num_bytes: 235307266.51802263
num_examples: 195
- name: validation
num_bytes: 238504673.12049434
num_examples: 194
download_size: 2109542272
dataset_size: 2410626249.435
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
rosbotmay/MNLP_M3_dpo_dataset
|
rosbotmay
|
2025-06-10T19:19:49Z
| 163 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-30T14:59:27Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 658494801
num_examples: 145704
download_size: 336658081
dataset_size: 658494801
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ahmedheakl/asm2asm_O0_500000_risc_2
|
ahmedheakl
|
2024-10-25T15:15:27Z
| 65 | 0 |
[
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-10-25T15:14:52Z
| 0 |
---
dataset_info:
features:
- name: x86
dtype: string
- name: risc
dtype: string
splits:
- name: train
num_bytes: 899237327
num_examples: 249852
download_size: 237774010
dataset_size: 899237327
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dakhli/test9
|
dakhli
|
2025-05-26T15:42:49Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-26T15:42:45Z
| 0 |
---
dataset_info:
features:
- name: marque_de_voiture
dtype: string
- name: modele
dtype: string
- name: nom_de_lecu
dtype: string
- name: equation_utilisee
dtype: string
splits:
- name: train
num_bytes: 7621
num_examples: 136
download_size: 4457
dataset_size: 7621
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tphage/beam_dataset_0502_eval
|
tphage
|
2025-05-02T23:07:11Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-02T23:07:07Z
| 0 |
---
dataset_info:
features:
- name: Image
dtype: image
- name: Question
dtype: string
- name: BeamDescription
dtype: string
- name: CauseEffect
dtype: string
- name: ResponseDescription
dtype: string
- name: Answer
dtype: string
splits:
- name: train
num_bytes: 6119891.0
num_examples: 100
download_size: 5693231
dataset_size: 6119891.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
math-ai/AutoMathText
|
math-ai
|
2025-05-15T14:24:38Z
| 55,210 | 172 |
[
"task_categories:text-generation",
"task_categories:question-answering",
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"modality:text",
"arxiv:2402.07625",
"region:us",
"mathematical-reasoning",
"reasoning",
"finetuning",
"pretraining",
"llm"
] |
[
"text-generation",
"question-answering"
] |
2024-01-24T01:39:26Z
| 0 |
---
language:
- en
license: cc-by-sa-4.0
size_categories:
- 10B<n<100B
task_categories:
- text-generation
- question-answering
pretty_name: AutoMathText
configs:
- config_name: web-0.50-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- data/web/0.85-0.90.jsonl
- data/web/0.80-0.85.jsonl
- data/web/0.75-0.80.jsonl
- data/web/0.70-0.75.jsonl
- data/web/0.65-0.70.jsonl
- data/web/0.60-0.65.jsonl
- data/web/0.55-0.60.jsonl
- data/web/0.50-0.55.jsonl
default: true
- config_name: web-0.60-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- data/web/0.85-0.90.jsonl
- data/web/0.80-0.85.jsonl
- data/web/0.75-0.80.jsonl
- data/web/0.70-0.75.jsonl
- data/web/0.65-0.70.jsonl
- data/web/0.60-0.65.jsonl
- config_name: web-0.70-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- data/web/0.85-0.90.jsonl
- data/web/0.80-0.85.jsonl
- data/web/0.75-0.80.jsonl
- data/web/0.70-0.75.jsonl
- data/web/0.65-0.70.jsonl
- data/web/0.60-0.65.jsonl
- config_name: web-0.80-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- data/web/0.85-0.90.jsonl
- data/web/0.80-0.85.jsonl
- config_name: web-full
data_files: data/web/*.jsonl
- config_name: arxiv-0.50-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- data/arxiv/0.70-0.80/*.jsonl
- data/arxiv/0.60-0.70/*.jsonl
- data/arxiv/0.50-0.60/*.jsonl
- config_name: arxiv-0.60-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- data/arxiv/0.70-0.80/*.jsonl
- data/arxiv/0.60-0.70/*.jsonl
- config_name: arxiv-0.70-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- data/arxiv/0.70-0.80/*.jsonl
- config_name: arxiv-0.80-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- config_name: arxiv-full
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- data/arxiv/0.80-0.90/*.jsonl
- data/arxiv/0.70-0.80/*.jsonl
- data/arxiv/0.60-0.70/*.jsonl
- data/arxiv/0.50-0.60/*.jsonl
- data/arxiv/0.00-0.50/*.jsonl
- config_name: code-0.50-to-1.00
data_files:
- split: train
path:
- data/code/agda/0.95-1.00.jsonl
- data/code/agda/0.90-0.95.jsonl
- data/code/agda/0.85-0.90.jsonl
- data/code/agda/0.80-0.85.jsonl
- data/code/agda/0.75-0.80.jsonl
- data/code/agda/0.70-0.75.jsonl
- data/code/agda/0.65-0.70.jsonl
- data/code/agda/0.60-0.65.jsonl
- data/code/agda/0.55-0.60.jsonl
- data/code/agda/0.50-0.55.jsonl
- data/code/c/0.95-1.00.jsonl
- data/code/c/0.90-0.95.jsonl
- data/code/c/0.85-0.90.jsonl
- data/code/c/0.80-0.85.jsonl
- data/code/c/0.75-0.80.jsonl
- data/code/c/0.70-0.75.jsonl
- data/code/c/0.65-0.70.jsonl
- data/code/c/0.60-0.65.jsonl
- data/code/c/0.55-0.60.jsonl
- data/code/c/0.50-0.55.jsonl
- data/code/cpp/0.95-1.00.jsonl
- data/code/cpp/0.90-0.95.jsonl
- data/code/cpp/0.85-0.90.jsonl
- data/code/cpp/0.80-0.85.jsonl
- data/code/cpp/0.75-0.80.jsonl
- data/code/cpp/0.70-0.75.jsonl
- data/code/cpp/0.65-0.70.jsonl
- data/code/cpp/0.60-0.65.jsonl
- data/code/cpp/0.55-0.60.jsonl
- data/code/cpp/0.50-0.55.jsonl
- data/code/fortran/0.95-1.00.jsonl
- data/code/fortran/0.90-0.95.jsonl
- data/code/fortran/0.85-0.90.jsonl
- data/code/fortran/0.80-0.85.jsonl
- data/code/fortran/0.75-0.80.jsonl
- data/code/fortran/0.70-0.75.jsonl
- data/code/fortran/0.65-0.70.jsonl
- data/code/fortran/0.60-0.65.jsonl
- data/code/fortran/0.55-0.60.jsonl
- data/code/fortran/0.50-0.55.jsonl
- data/code/gap/0.95-1.00.jsonl
- data/code/gap/0.90-0.95.jsonl
- data/code/gap/0.85-0.90.jsonl
- data/code/gap/0.80-0.85.jsonl
- data/code/gap/0.75-0.80.jsonl
- data/code/gap/0.70-0.75.jsonl
- data/code/gap/0.65-0.70.jsonl
- data/code/gap/0.60-0.65.jsonl
- data/code/gap/0.55-0.60.jsonl
- data/code/gap/0.50-0.55.jsonl
- data/code/github-coq-train/0.95-1.00.jsonl
- data/code/github-coq-train/0.90-0.95.jsonl
- data/code/github-coq-train/0.85-0.90.jsonl
- data/code/github-coq-train/0.80-0.85.jsonl
- data/code/github-coq-train/0.75-0.80.jsonl
- data/code/github-coq-train/0.70-0.75.jsonl
- data/code/github-coq-train/0.65-0.70.jsonl
- data/code/github-coq-train/0.60-0.65.jsonl
- data/code/github-coq-train/0.55-0.60.jsonl
- data/code/github-coq-train/0.50-0.55.jsonl
- data/code/github-isabelle-train/0.95-1.00.jsonl
- data/code/github-isabelle-train/0.90-0.95.jsonl
- data/code/github-isabelle-train/0.85-0.90.jsonl
- data/code/github-isabelle-train/0.80-0.85.jsonl
- data/code/github-isabelle-train/0.75-0.80.jsonl
- data/code/github-isabelle-train/0.70-0.75.jsonl
- data/code/github-isabelle-train/0.65-0.70.jsonl
- data/code/github-isabelle-train/0.60-0.65.jsonl
- data/code/github-isabelle-train/0.55-0.60.jsonl
- data/code/github-isabelle-train/0.50-0.55.jsonl
- data/code/github-lean-train/0.95-1.00.jsonl
- data/code/github-lean-train/0.90-0.95.jsonl
- data/code/github-lean-train/0.85-0.90.jsonl
- data/code/github-lean-train/0.80-0.85.jsonl
- data/code/github-lean-train/0.75-0.80.jsonl
- data/code/github-lean-train/0.70-0.75.jsonl
- data/code/github-lean-train/0.65-0.70.jsonl
- data/code/github-lean-train/0.60-0.65.jsonl
- data/code/github-lean-train/0.55-0.60.jsonl
- data/code/github-lean-train/0.50-0.55.jsonl
- data/code/github-MATLAB-train/0.95-1.00.jsonl
- data/code/github-MATLAB-train/0.90-0.95.jsonl
- data/code/github-MATLAB-train/0.85-0.90.jsonl
- data/code/github-MATLAB-train/0.80-0.85.jsonl
- data/code/github-MATLAB-train/0.75-0.80.jsonl
- data/code/github-MATLAB-train/0.70-0.75.jsonl
- data/code/github-MATLAB-train/0.65-0.70.jsonl
- data/code/github-MATLAB-train/0.60-0.65.jsonl
- data/code/github-MATLAB-train/0.55-0.60.jsonl
- data/code/github-MATLAB-train/0.50-0.55.jsonl
- data/code/haskell/0.95-1.00.jsonl
- data/code/haskell/0.90-0.95.jsonl
- data/code/haskell/0.85-0.90.jsonl
- data/code/haskell/0.80-0.85.jsonl
- data/code/haskell/0.75-0.80.jsonl
- data/code/haskell/0.70-0.75.jsonl
- data/code/haskell/0.65-0.70.jsonl
- data/code/haskell/0.60-0.65.jsonl
- data/code/haskell/0.55-0.60.jsonl
- data/code/haskell/0.50-0.55.jsonl
- data/code/idris/0.95-1.00.jsonl
- data/code/idris/0.90-0.95.jsonl
- data/code/idris/0.85-0.90.jsonl
- data/code/idris/0.80-0.85.jsonl
- data/code/idris/0.75-0.80.jsonl
- data/code/idris/0.70-0.75.jsonl
- data/code/idris/0.65-0.70.jsonl
- data/code/idris/0.60-0.65.jsonl
- data/code/idris/0.55-0.60.jsonl
- data/code/idris/0.50-0.55.jsonl
- data/code/isa_proofsteps/0.95-1.00.jsonl
- data/code/isa_proofsteps/0.90-0.95.jsonl
- data/code/isa_proofsteps/0.85-0.90.jsonl
- data/code/isa_proofsteps/0.80-0.85.jsonl
- data/code/isa_proofsteps/0.75-0.80.jsonl
- data/code/isa_proofsteps/0.70-0.75.jsonl
- data/code/isa_proofsteps/0.65-0.70.jsonl
- data/code/isa_proofsteps/0.60-0.65.jsonl
- data/code/isa_proofsteps/0.55-0.60.jsonl
- data/code/isa_proofsteps/0.50-0.55.jsonl
- data/code/julia/0.95-1.00.jsonl
- data/code/julia/0.90-0.95.jsonl
- data/code/julia/0.85-0.90.jsonl
- data/code/julia/0.80-0.85.jsonl
- data/code/julia/0.75-0.80.jsonl
- data/code/julia/0.70-0.75.jsonl
- data/code/julia/0.65-0.70.jsonl
- data/code/julia/0.60-0.65.jsonl
- data/code/julia/0.55-0.60.jsonl
- data/code/julia/0.50-0.55.jsonl
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- data/code/jupyter-notebook/0.75-0.80.jsonl
- data/code/jupyter-notebook/0.70-0.75.jsonl
- data/code/jupyter-notebook/0.65-0.70.jsonl
- data/code/jupyter-notebook/0.60-0.65.jsonl
- data/code/jupyter-notebook/0.55-0.60.jsonl
- data/code/jupyter-notebook/0.50-0.55.jsonl
- data/code/lean_proofsteps/0.95-1.00.jsonl
- data/code/lean_proofsteps/0.90-0.95.jsonl
- data/code/lean_proofsteps/0.85-0.90.jsonl
- data/code/lean_proofsteps/0.80-0.85.jsonl
- data/code/lean_proofsteps/0.75-0.80.jsonl
- data/code/lean_proofsteps/0.70-0.75.jsonl
- data/code/lean_proofsteps/0.65-0.70.jsonl
- data/code/lean_proofsteps/0.60-0.65.jsonl
- data/code/lean_proofsteps/0.55-0.60.jsonl
- data/code/lean_proofsteps/0.50-0.55.jsonl
- data/code/maple/0.95-1.00.jsonl
- data/code/maple/0.90-0.95.jsonl
- data/code/maple/0.85-0.90.jsonl
- data/code/maple/0.80-0.85.jsonl
- data/code/maple/0.75-0.80.jsonl
- data/code/maple/0.70-0.75.jsonl
- data/code/maple/0.65-0.70.jsonl
- data/code/maple/0.60-0.65.jsonl
- data/code/maple/0.55-0.60.jsonl
- data/code/maple/0.50-0.55.jsonl
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- data/code/python/0.75-0.80.jsonl
- data/code/python/0.70-0.75.jsonl
- data/code/python/0.65-0.70.jsonl
- data/code/python/0.60-0.65.jsonl
- data/code/python/0.55-0.60.jsonl
- data/code/python/0.50-0.55.jsonl
- data/code/r/0.95-1.00.jsonl
- data/code/r/0.90-0.95.jsonl
- data/code/r/0.85-0.90.jsonl
- data/code/r/0.80-0.85.jsonl
- data/code/r/0.75-0.80.jsonl
- data/code/r/0.70-0.75.jsonl
- data/code/r/0.65-0.70.jsonl
- data/code/r/0.60-0.65.jsonl
- data/code/r/0.55-0.60.jsonl
- data/code/r/0.50-0.55.jsonl
- data/code/tex/0.95-1.00.jsonl
- data/code/tex/0.90-0.95.jsonl
- data/code/tex/0.85-0.90.jsonl
- data/code/tex/0.80-0.85.jsonl
- data/code/tex/0.75-0.80.jsonl
- data/code/tex/0.70-0.75.jsonl
- data/code/tex/0.65-0.70.jsonl
- data/code/tex/0.60-0.65.jsonl
- data/code/tex/0.55-0.60.jsonl
- data/code/tex/0.50-0.55.jsonl
- config_name: code-python-0.50-to-1.00
data_files:
- split: train
path:
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- data/code/python/0.75-0.80.jsonl
- data/code/python/0.70-0.75.jsonl
- data/code/python/0.65-0.70.jsonl
- data/code/python/0.60-0.65.jsonl
- data/code/python/0.55-0.60.jsonl
- data/code/python/0.50-0.55.jsonl
- config_name: code-python-0.60-to-1.00
data_files:
- split: train
path:
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- data/code/python/0.75-0.80.jsonl
- data/code/python/0.70-0.75.jsonl
- data/code/python/0.65-0.70.jsonl
- data/code/python/0.60-0.65.jsonl
- config_name: code-python-0.70-to-1.00
data_files:
- split: train
path:
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- data/code/python/0.75-0.80.jsonl
- data/code/python/0.70-0.75.jsonl
- config_name: code-python-0.80-to-1.00
data_files:
- split: train
path:
- data/code/python/0.95-1.00.jsonl
- data/code/python/0.90-0.95.jsonl
- data/code/python/0.85-0.90.jsonl
- data/code/python/0.80-0.85.jsonl
- config_name: code-jupyter-notebook-0.50-to-1.00
data_files:
- split: train
path:
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- data/code/jupyter-notebook/0.75-0.80.jsonl
- data/code/jupyter-notebook/0.70-0.75.jsonl
- data/code/jupyter-notebook/0.65-0.70.jsonl
- data/code/jupyter-notebook/0.60-0.65.jsonl
- data/code/jupyter-notebook/0.55-0.60.jsonl
- data/code/jupyter-notebook/0.50-0.55.jsonl
- config_name: code-jupyter-notebook-0.60-to-1.00
data_files:
- split: train
path:
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- data/code/jupyter-notebook/0.75-0.80.jsonl
- data/code/jupyter-notebook/0.70-0.75.jsonl
- data/code/jupyter-notebook/0.65-0.70.jsonl
- data/code/jupyter-notebook/0.60-0.65.jsonl
- config_name: code-jupyter-notebook-0.70-to-1.00
data_files:
- split: train
path:
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- data/code/jupyter-notebook/0.75-0.80.jsonl
- data/code/jupyter-notebook/0.70-0.75.jsonl
- config_name: code-jupyter-notebook-0.80-to-1.00
data_files:
- split: train
path:
- data/code/jupyter-notebook/0.95-1.00.jsonl
- data/code/jupyter-notebook/0.90-0.95.jsonl
- data/code/jupyter-notebook/0.85-0.90.jsonl
- data/code/jupyter-notebook/0.80-0.85.jsonl
- config_name: code-full
data_files:
- split: train
path:
- data/code/*/*.jsonl
tags:
- mathematical-reasoning
- reasoning
- finetuning
- pretraining
- llm
---
🎉 **This work, introducing the AutoMathText dataset and the AutoDS method, has been accepted to The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025 Findings)!** 🎉
# AutoMathText
**AutoMathText** is an extensive and carefully curated dataset encompassing around **200 GB** of mathematical texts. It is a compilation sourced from a diverse range of platforms, including various websites, arXiv, and GitHub (OpenWebMath, RedPajama, Algebraic Stack). This rich repository has been **autonomously selected (labeled) by the state-of-the-art open-source language model**, Qwen-72B. Each piece of content in the dataset is assigned **a score `lm_q1q2_score` within the range of [0, 1]**, reflecting its relevance, quality, and educational value in the context of mathematical intelligence.
GitHub homepage: https://github.com/yifanzhang-pro/AutoMathText
ArXiv paper: https://huggingface.co/papers/2402.07625 (https://arxiv.org/abs/2402.07625)
## Objective
The primary aim of the **AutoMathText** dataset is to provide a comprehensive and reliable resource for a wide array of users - from academic researchers and educators to AI practitioners and mathematics enthusiasts. This dataset is particularly geared towards:
- Facilitating advanced research in **the intersection of mathematics and artificial intelligence**.
- Serving as an educational tool for **learning and teaching complex mathematical concepts**.
- Providing **a foundation for developing and training AI models** specialized in processing and understanding **mathematical content**.
## Configs
```YAML
configs:
- config_name: web-0.50-to-1.00
data_files:
- split: train
path:
- data/web/0.95-1.00.jsonl
- data/web/0.90-0.95.jsonl
- ...
- data/web/0.50-0.55.jsonl
default: true
- config_name: web-0.60-to-1.00
- config_name: web-0.70-to-1.00
- config_name: web-0.80-to-1.00
- config_name: web-full
data_files: data/web/*.jsonl
- config_name: arxiv-0.50-to-1.00
data_files:
- split: train
path:
- data/arxiv/0.90-1.00/*.jsonl
- ...
- data/arxiv/0.50-0.60/*.jsonl
- config_name: arxiv-0.60-to-1.00
- config_name: arxiv-0.70-to-1.00
- config_name: arxiv-0.80-to-1.00
- config_name: arxiv-full
data_files: data/arxiv/*/*.jsonl
- config_name: code-0.50-to-1.00
data_files:
- split: train
path:
- data/code/*/0.95-1.00.jsonl
- ...
- data/code/*/0.50-0.55.jsonl
- config_name: code-python-0.50-to-1.00
  data_files:
  - split: train
    path:
    - data/code/python/0.95-1.00.jsonl
    - ...
    - data/code/python/0.50-0.55.jsonl
- config_name: code-python-0.60-to-1.00
- config_name: code-python-0.70-to-1.00
- config_name: code-python-0.80-to-1.00
- config_name: code-jupyter-notebook-0.50-to-1.00
  data_files:
  - split: train
    path:
    - data/code/jupyter-notebook/0.95-1.00.jsonl
    - ...
    - data/code/jupyter-notebook/0.50-0.55.jsonl
- config_name: code-jupyter-notebook-0.60-to-1.00
- config_name: code-jupyter-notebook-0.70-to-1.00
- config_name: code-jupyter-notebook-0.80-to-1.00
- config_name: code-full
data_files: data/code/*/*.jsonl
```
How to load data:
```python
from datasets import load_dataset
ds = load_dataset("math-ai/AutoMathText", "web-0.50-to-1.00") # or any valid config_name
```
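To keep only the highest-quality records, you can stream a config and filter on the `lm_q1q2_score` field described above. A minimal sketch (the 0.8 threshold is an arbitrary example):
```python
from datasets import load_dataset

# Stream the web split so the full ~200 GB never has to be downloaded at once.
ds = load_dataset(
    "math-ai/AutoMathText", "web-0.50-to-1.00", split="train", streaming=True
)

# Keep only records whose quality score clears an (arbitrary) 0.8 threshold.
high_quality = ds.filter(lambda ex: ex["lm_q1q2_score"] >= 0.8)

for example in high_quality.take(3):
    print(example["lm_q1q2_score"])
```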
## Features
- **Volume**: Approximately 200 GB of text data (in natural language and programming languages).
- **Content**: A diverse collection of mathematical texts, including but not limited to research papers, educational articles, and code documentation.
- **Labeling**: Every text is **scored** by Qwen-72B, a sophisticated language model, ensuring a high standard of relevance and accuracy.
- **Scope**: Covers a wide spectrum of mathematical topics, making it suitable for various applications in advanced research and education.
## References
- OpenWebMath [[link]](https://huggingface.co/datasets/open-web-math/open-web-math)
- RedPajama [[link]](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- Algebraic Stack [[link]](https://huggingface.co/datasets/EleutherAI/proof-pile-2) (a subset of Proof-Pile-2)
## Citation
We appreciate your use of **AutoMathText** in your work. If you find this repository helpful, please consider citing it and starring this repo. Feel free to contact [email protected] or open an issue if you have any questions (GitHub homepage: https://github.com/yifanzhang-pro/AutoMathText).
```bibtex
@article{zhang2025autonomous,
title={Autonomous Data Selection with Zero-shot Generative Classifiers for Mathematical Texts},
author={Zhang, Yifan and Luo, Yifan and Yuan, Yang and Yao, Andrew Chi-Chih},
journal={The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025 Findings)},
year={2025}
}
```
|
TAUR-dev/BFEval_RC_VarFix_pv_v2_all_tasks-eval_rl
|
TAUR-dev
|
2025-09-22T00:30:40Z
| 81 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-09-21T22:04:17Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: task_config
dtype: string
- name: task_source
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_responses
list: 'null'
- name: model_responses__eval_is_correct
list: 'null'
- name: all_other_columns
dtype: string
- name: original_split
dtype: string
- name: acronym
dtype: string
- name: answer_index
dtype: int64
- name: answer_key
dtype: string
- name: choices
struct:
- name: label
list: string
- name: text
list: string
- name: difficulty
dtype: string
- name: domain
dtype: string
- name: evaluation_type
dtype: string
- name: expected_answer_format
dtype: string
- name: formed_acronym
dtype: string
- name: id
dtype: string
- name: length
dtype: int64
- name: letters
dtype: string
- name: metadata
dtype: string
- name: original_answer
dtype: string
- name: prompt__few_shot
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
- name: task_type
dtype: string
- name: variant
dtype: string
- name: word_count
dtype: int64
- name: words
list: string
- name: model_responses__best_of_n_atags
list: string
- name: model_responses__best_of_n_atags__finish_reason_length_flags
list: bool
- name: model_responses__best_of_n_atags__length_partial_responses
list: string
- name: prompt__best_of_n_atags__metadata
dtype: string
- name: model_responses__best_of_n_atags__metadata
dtype: string
- name: model_responses__best_of_n_atags__eval_is_correct
list: bool
- name: model_responses__best_of_n_atags__eval_extracted_answers
list: string
- name: model_responses__best_of_n_atags__eval_extraction_metadata
dtype: string
- name: model_responses__best_of_n_atags__eval_evaluation_metadata
dtype: string
- name: model_responses__best_of_n_atags__internal_answers__eval_is_correct
list:
list: bool
- name: model_responses__best_of_n_atags__internal_answers__eval_extracted_answers
list:
list: string
- name: model_responses__best_of_n_atags__internal_answers__eval_extraction_metadata
dtype: string
- name: model_responses__best_of_n_atags__internal_answers__eval_evaluation_metadata
dtype: string
- name: model_responses__best_of_n_atags__metrics
struct:
- name: flips_by
list: int64
- name: flips_total
dtype: int64
- name: num_correct
dtype: int64
- name: pass_at_n
dtype: int64
- name: percent_correct
dtype: float64
- name: skill_count
struct:
- name: answer_revision
list: int64
- name: best_of_n
list: int64
- name: reflect_close
list: int64
- name: reflect_open
list: int64
- name: reflection_sbon
list: int64
- name: sample_close
list: int64
- name: sample_open
list: int64
- name: vote_close
list: int64
- name: vote_open
list: int64
- name: voting
list: int64
- name: total_responses
dtype: int64
- name: eval_date
dtype: string
- name: question_idx
dtype: int64
- name: response_idx
dtype: int64
- name: original_response_idx_in_16
dtype: int64
- name: bf_prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: original_response
dtype: string
- name: model_responses__bf_continuation
list: string
- name: model_responses__bf_continuation__finish_reason_length_flags
list: bool
- name: model_responses__bf_continuation__length_partial_responses
list: string
- name: bf_prompt__bf_continuation__metadata
struct:
- name: api_url
dtype: string
- name: backend
dtype: string
- name: chat_template_applied
dtype: bool
- name: generation_params
struct:
- name: chat_template_applied
dtype: bool
- name: max_tokens
dtype: int64
- name: n
dtype: int64
- name: repetition_penalty
dtype: float64
- name: temperature
dtype: float64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: model_name
dtype: string
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: model_responses__bf_continuation__metadata
struct:
- name: backend
dtype: string
- name: model_name
dtype: string
- name: n_responses
dtype: int64
- name: model_responses__budget_forced
list: string
- name: model_responses__budget_forced__eval_is_correct
list: bool
- name: model_responses__budget_forced__eval_extracted_answers
list: string
- name: model_responses__budget_forced__eval_extraction_metadata
dtype: string
- name: model_responses__budget_forced__eval_evaluation_metadata
dtype: string
- name: model_responses__budget_forced__internal_answers__eval_is_correct
list:
list: bool
- name: model_responses__budget_forced__internal_answers__eval_extracted_answers
list:
list: string
- name: model_responses__budget_forced__internal_answers__eval_extraction_metadata
dtype: string
- name: model_responses__budget_forced__internal_answers__eval_evaluation_metadata
dtype: string
- name: model_responses__budget_forced__metrics
struct:
- name: flips_by
list: int64
- name: flips_total
dtype: int64
- name: num_correct
dtype: int64
- name: pass_at_n
dtype: int64
- name: percent_correct
dtype: float64
- name: skill_count
struct:
- name: answer_revision
list: int64
- name: best_of_n
list: int64
- name: reflect_close
list: int64
- name: reflect_open
list: int64
- name: reflection_sbon
list: int64
- name: sample_close
list: int64
- name: sample_open
list: int64
- name: vote_close
list: int64
- name: vote_open
list: int64
- name: voting
list: int64
- name: total_responses
dtype: int64
splits:
- name: train
num_bytes: 1921299601
num_examples: 13164
download_size: 585417470
dataset_size: 1921299601
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
leinad-deinor/redeIT-xml-alpaca
|
leinad-deinor
|
2024-11-27T11:24:09Z
| 16 | 0 |
[
"language:it",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-25T08:57:23Z
| 0 |
---
language:
- it
license: apache-2.0
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.2_num-company_2_dataset_0_for_gen_13
|
HungVu2003
|
2025-04-08T06:16:48Z
| 16 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-08T01:00:23Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 969164
num_examples: 11250
download_size: 634539
dataset_size: 969164
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bgereke89/TwinLlama-3.1-8B-results
|
bgereke89
|
2024-11-18T20:52:42Z
| 66 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-11-18T17:20:00Z
| 0 |
---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: prompt
dtype: string
- name: answers
dtype: string
- name: evaluation
struct:
- name: accuracy
struct:
- name: analysis
dtype: string
- name: score
dtype: int64
- name: style
struct:
- name: analysis
dtype: string
- name: score
dtype: int64
- name: accuracy
dtype: int64
- name: style
dtype: int64
splits:
- name: test
num_bytes: 33069
num_examples: 10
download_size: 37070
dataset_size: 33069
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
jablonkagroup/mol2svg-multimodal
|
jablonkagroup
|
2025-05-12T03:08:57Z
| 136 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-25T09:44:55Z
| 0 |
---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
- name: SMILES
dtype: string
- name: __index_level_0__
dtype: string
- name: split
dtype: string
- name: SMILES_ORIGINAL
dtype: string
- name: IMAGE
dtype: image
- name: SELFIES
dtype: string
- name: InChIKey
dtype: string
- name: IUPAC
dtype: string
- name: template_original
dtype: string
- name: template
dtype: string
splits:
- name: train
num_bytes: 25004175.375
num_examples: 1045
download_size: 5328662
dataset_size: 25004175.375
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
capa2000/binary
|
capa2000
|
2025-06-23T09:14:04Z
| 0 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-06-23T09:12:08Z
| 0 |
---
license: apache-2.0
---
|
hibana2077/IP102
|
hibana2077
|
2025-06-18T14:07:07Z
| 0 | 0 |
[
"license:mit",
"size_categories:n<1K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-06-18T13:56:47Z
| 0 |
---
license: mit
---
# Cite
```bib
@inproceedings{Wu2019Insect,
title={IP102: A Large-Scale Benchmark Dataset for Insect Pest Recognition},
author={Xiaoping Wu and Chi Zhan and Yukun Lai and Ming-Ming Cheng and Jufeng Yang},
booktitle={IEEE CVPR},
pages={8787--8796},
year={2019},
}
```
|
thailevann/Government_services_DPO_v4
|
thailevann
|
2025-06-02T03:58:12Z
| 22 | 0 |
[
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-06-02T01:23:12Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 21422686
num_examples: 27466
download_size: 6563277
dataset_size: 21422686
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AveMujica/CostalSeg-MM
|
AveMujica
|
2025-09-29T21:41:56Z
| 0 | 0 |
[
"license:apache-2.0",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[] |
2025-09-29T21:25:16Z
| 0 |
---
license: apache-2.0
---
|
Villaitech/argentina-news
|
Villaitech
|
2025-04-21T08:58:49Z
| 47 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-04-20T02:51:51Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: string
- name: document
dtype: string
- name: metadata
struct:
- name: authors
sequence: string
- name: image_url
dtype: string
- name: publish_date
dtype: string
- name: source
dtype: string
- name: summary
dtype: string
- name: title
dtype: string
- name: url
dtype: string
splits:
- name: '2025_04_20'
num_bytes: 4099762
num_examples: 1081
download_size: 2326061
dataset_size: 4099762
configs:
- config_name: default
data_files:
- split: '2025_04_20'
path: data/2025_04_20-*
---
|
CohenQu/deepscalar_RL_easy_1
|
CohenQu
|
2025-05-22T04:32:10Z
| 0 | 0 |
[
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-22T02:33:05Z
| 0 |
---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: reward
dtype: float64
splits:
- name: train
num_bytes: 859000
num_examples: 3500
- name: test
num_bytes: 88795
num_examples: 350
download_size: 67715
dataset_size: 947795
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
studymakesmehappyyyyy/mmlu
|
studymakesmehappyyyyy
|
2024-10-03T03:12:18Z
| 14 | 0 |
[
"license:mit",
"region:us"
] |
[] |
2024-10-03T03:08:52Z
| 0 |
---
license: mit
---
Downloaded from https://github.com/Helw150/mmlu

This file contains the dev, val, and test data for our multitask test.
This file also contains auxiliary_training data.
|
mehuldamani/openbookqa_fixed
|
mehuldamani
|
2025-05-19T18:45:50Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-19T14:53:53Z
| 0 |
---
dataset_info:
features:
- name: id
dtype: string
- name: question_stem
dtype: string
- name: choices
sequence:
- name: text
dtype: string
- name: label
dtype: string
- name: answerKey
dtype: string
- name: problem
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 146255
num_examples: 300
- name: validation
num_bytes: 149910
num_examples: 300
- name: test
num_bytes: 144144
num_examples: 300
download_size: 228191
dataset_size: 440309
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
xbilek25/train_30s_speed_0point1_2520_3360
|
xbilek25
|
2025-05-11T09:36:46Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-11T09:36:30Z
| 0 |
---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 713511520.0
num_examples: 840
download_size: 641796849
dataset_size: 713511520.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reasoning-proj/j_c_dfiltered_DeepSeek-R1-Distill-Qwen-7B_mneutral_insert_random_characters_t90
|
reasoning-proj
|
2025-05-11T04:41:19Z
| 0 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2025-05-11T04:41:13Z
| 0 |
---
dataset_info:
features:
- name: question
dtype: string
- name: answer_content
dtype: string
- name: reference_answer
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: question_license
dtype: string
- name: question_source
dtype: string
- name: model_name
dtype: string
- name: verifier_score
dtype: int64
- name: mutated_answer_content
dtype: string
- name: continuation_1
dtype: string
- name: complete_answer_1
dtype: string
- name: continuation_2
dtype: string
- name: complete_answer_2
dtype: string
- name: continuation_3
dtype: string
- name: complete_answer_3
dtype: string
- name: continuation_4
dtype: string
- name: complete_answer_4
dtype: string
- name: continuation_5
dtype: string
- name: complete_answer_5
dtype: string
- name: continuation_6
dtype: string
- name: complete_answer_6
dtype: string
- name: continuation_7
dtype: string
- name: complete_answer_7
dtype: string
- name: continuation_8
dtype: string
- name: complete_answer_8
dtype: string
- name: continuation_model
dtype: string
- name: verifier_score_1
dtype: int64
- name: verifier_score_2
dtype: int64
- name: verifier_score_3
dtype: int64
- name: verifier_score_4
dtype: int64
- name: verifier_score_5
dtype: int64
- name: verifier_score_6
dtype: int64
- name: verifier_score_7
dtype: int64
- name: verifier_score_8
dtype: int64
splits:
- name: train
num_bytes: 100686134
num_examples: 600
download_size: 37977338
dataset_size: 100686134
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
lecslab/porc-llama3_1_1b-v1
|
lecslab
|
2024-12-18T23:00:57Z
| 19 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-12-17T23:45:21Z
| 0 |
---
dataset_info:
features:
- name: story
dtype: string
- name: generated_text_1
dtype: string
- name: generated_text_2
dtype: string
- name: mic_chosen
dtype: int64
- name: mar_chosen
dtype: int64
- name: ali_chosen
dtype: int64
- name: index
dtype: int64
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 75999.67741935483
num_examples: 65
- name: test
num_bytes: 32738.322580645163
num_examples: 28
download_size: 92284
dataset_size: 108738.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
alvations/c4p0-x1-en-ja
|
alvations
|
2024-03-24T03:55:23Z
| 23,170 | 0 |
[
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] |
[] |
2024-03-23T09:54:37Z
| 0 |
---
dataset_info:
features:
- name: source
dtype: string
- name: target
dtype: string
- name: target_backto_source
dtype: string
- name: raw_target
list:
- name: generated_text
dtype: string
- name: raw_target_backto_source
list:
- name: generated_text
dtype: string
- name: prompt
dtype: string
- name: reverse_prompt
dtype: string
- name: source_langid
dtype: string
- name: target_langid
dtype: string
- name: target_backto_source_langid
dtype: string
- name: doc_id
dtype: int64
- name: sent_id
dtype: int64
- name: timestamp
dtype: string
- name: url
dtype: string
- name: doc_hash
dtype: string
splits:
- name: train
num_bytes: 49764
num_examples: 42
download_size: 37636
dataset_size: 49764
configs:
- config_name: default
data_files:
- split: train
path: 66034f82c5c65ae4/train-*
---
|
Abooooo/ACG
|
Abooooo
|
2024-11-10T08:25:26Z
| 19 | 0 |
[
"task_categories:audio-classification",
"language:zh",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] |
[
"audio-classification"
] |
2024-11-10T07:19:09Z
| 0 |
---
license: mit
task_categories:
- audio-classification
language:
- zh
- en
---
Based on Codecfake (AISHELL3 + VCTK), the corresponding forged audio was synthesized with GPT-SoVIT and ChatTTS respectively, for use in tasks such as fake audio detection; supports Chinese and English.
|
Yuyeong/rw_amazon-ratings_nbw_1_public_masked
|
Yuyeong
|
2025-05-24T17:17:06Z
| 0 | 0 |
[
"region:us"
] |
[] |
2025-05-24T17:12:57Z
| 0 |
---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': '0'
'1': '1'
'2': '2'
'3': '3'
'4': '4'
- name: group_idx
dtype: int64
- name: node_idx
dtype: int64
- name: train_0
dtype: bool
- name: validation_0
dtype: bool
- name: test_0
dtype: bool
- name: train_1
dtype: bool
- name: validation_1
dtype: bool
- name: test_1
dtype: bool
- name: train_2
dtype: bool
- name: validation_2
dtype: bool
- name: test_2
dtype: bool
- name: train_3
dtype: bool
- name: validation_3
dtype: bool
- name: test_3
dtype: bool
- name: train_4
dtype: bool
- name: validation_4
dtype: bool
- name: test_4
dtype: bool
- name: train_5
dtype: bool
- name: validation_5
dtype: bool
- name: test_5
dtype: bool
- name: train_6
dtype: bool
- name: validation_6
dtype: bool
- name: test_6
dtype: bool
- name: train_7
dtype: bool
- name: validation_7
dtype: bool
- name: test_7
dtype: bool
- name: train_8
dtype: bool
- name: validation_8
dtype: bool
- name: test_8
dtype: bool
- name: train_9
dtype: bool
- name: validation_9
dtype: bool
- name: test_9
dtype: bool
splits:
- name: train
num_bytes: 4173194690
num_examples: 2449200
download_size: 2079561538
dataset_size: 4173194690
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about those datasets. This dataset is updated on a daily basis and includes the cards of publicly available datasets on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards (see the sketch after this list)
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
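For the text-mining use case, a minimal sketch is shown below. The repository id and the `tags` column are assumptions about this dataset's schema, so adjust them before running:
```python
from collections import Counter

from datasets import load_dataset

# Repo id and column name are assumptions -- substitute this dataset's
# actual id and schema.
cards = load_dataset("librarian-bots/dataset_cards_with_metadata", split="train")

# Count the most common tags across all cards as a crude theme signal.
tag_counts = Counter(tag for tags in cards["tags"] for tag in tags)
print(tag_counts.most_common(10))
```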
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
Dataset Creation
Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular, it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards, and this option may be preferable if you have a very specific use case or require a different format.
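For example, the `huggingface_hub` client can fetch a single card directly; a minimal sketch (the repo id is only an illustration):
```python
from huggingface_hub import DatasetCard

# Fetch one dataset card straight from the Hub instead of using this dump.
card = DatasetCard.load("math-ai/AutoMathText")  # example repo id
print(card.data.to_dict())  # parsed YAML front matter
print(card.text[:500])      # markdown body below the front matter
```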
Source Data
The source data is the README.md files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository, although this information can be gathered from the Hugging Face Hub API.
Annotations [optional]
There are no additional annotations in this dataset beyond the dataset card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Dataset cards are created by the community and we do not have any control over their content. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information they contain. Some dataset cards will themselves discuss bias, sometimes by providing examples of bias in the training data or in examples drawn from the dataset itself. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset, but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact