Update README.md
README.md CHANGED
@@ -282,6 +282,7 @@ configs:
  - split: test
    path: data/test-*
---
+
# Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'

Large language models (LLMs) have achieved high accuracy, i.e., more than 90% pass@1, on the Python coding problems in HumanEval and MBPP. A natural question, then, is whether LLMs can match the code completion performance of human developers. Unfortunately, this question cannot be answered with existing manually crafted or simple (e.g., single-line) code generation benchmarks, since such tasks fail to represent real-world software development. In addition, existing benchmarks often use weak code correctness metrics, which leads to misleading conclusions.
@@ -298,9 +299,8 @@ REPOCOD_Lite_Unified is a variation of REPOCOD-Lite that has a similar format as

* Check our [Leaderboard](https://lt-asset.github.io/REPOCOD/) for preliminary results using SOTA LLMs with RAG.

-*
-```

+```
"instance_id": Instance ID in REPOCOD
"version": Version of REPOCOD
"gold_patches": {
@@ -322,13 +322,11 @@ REPOCOD_Lite_Unified is a variation of REPOCOD-Lite that has a similar format as
"code": Problem statement for code generation.
"test": Problem statement for test generation.
}
-# "problem_statement_source": "repocod",
"environment_setup_commit": base commit
"evaluation": {
"FAIL_TO_PASS": list of relevant test cases
"PASS_TO_PASS": None (all remaining tests that pass; we choose not to run the PASS_TO_PASS tests to avoid the computational cost)
}
-
```
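For readers who want to consume the unified format programmatically, the sketch below loads the test split with the Hugging Face `datasets` library and reads the fields listed above. It is a minimal sketch, not official usage: the repository id `lt-asset/REPOCOD_Lite_Unified` and the handling of the `evaluation` column (dict vs. JSON-encoded string) are assumptions; adjust them to match the actual dataset card.

```python
# Minimal sketch: load the dataset and inspect the fields described above.
# Assumptions: the dataset id ("lt-asset/REPOCOD_Lite_Unified") and the storage
# format of the "evaluation" column (dict vs. JSON string) are not confirmed here.
import json

from datasets import load_dataset

ds = load_dataset("lt-asset/REPOCOD_Lite_Unified", split="test")

record = ds[0]
print(record["instance_id"])               # instance ID in REPOCOD
print(record["version"])                   # REPOCOD version
print(record["environment_setup_commit"])  # base commit of the target repository

evaluation = record["evaluation"]
if isinstance(evaluation, str):            # tolerate either a dict or a JSON-encoded string
    evaluation = json.loads(evaluation)

fail_to_pass = evaluation["FAIL_TO_PASS"]  # the relevant test cases for this instance
print(len(fail_to_pass), "FAIL_TO_PASS tests")
# PASS_TO_PASS is None: the remaining passing tests are not re-run to save compute.
```

Presumably, a generated completion is then judged by whether the listed FAIL_TO_PASS tests pass in a checkout of the repository at `environment_setup_commit`; only those relevant tests are executed, per the note on PASS_TO_PASS above.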
## Citation