Commit 9f2a24a

Add small utility and paper.
1 parent 0686b5b commit 9f2a24a

2 files changed (+72, -0)

_publications/add_from_arxiv.py

Lines changed: 60 additions & 0 deletions
@@ -0,0 +1,60 @@
#!/bin/python3

import argparse
import arxiv
import re
import os
import textwrap


def _first_non_stopword(title: str) -> str:
    # Return the first title word that is not a stopword; used to build the filename.
    for word in re.split(r"\W", title.lower()):
        if word in ("a", "an", "the", "is", "are", "what", "who", "your"):
            continue
        return word
    raise ValueError(f'The title seems to have only stopwords! "{title}"')


def _author_lastname(author_name: str) -> str:
    return author_name.split(" ")[-1].lower()


def get_info(paper_id: str, out_dir: str) -> None:
    # Fetch the paper metadata from arXiv and write a publication stub file.
    search = arxiv.Search(id_list=[paper_id])
    paper = next(search.results())

    # Unwrap hard line breaks in the abstract while preserving paragraph breaks.
    summary = (
        paper.summary.replace("\n\n", "@@--@@")
        .replace("\n", " ")
        .replace("@@--@@", "\n\n")
    )

    tmpl = textwrap.dedent(
        f"""
        ---
        layout: publication
        title: "{paper.title}"
        authors: {", ".join(a.name for a in paper.authors)}
        conference:
        year: {paper.published.year}
        additional_links:
           - {{name: "ArXiV", url: "https://arxiv.org/abs/{paper_id}"}}
        tags: ["TODO"]
        ---
        {summary}
        """
    )

    # Filename convention: <first-author-lastname><year><first-non-stopword>.markdown
    filename = f"{_author_lastname(paper.authors[0].name)}{paper.published.year}{_first_non_stopword(paper.title)}.markdown"
    with open(os.path.join(out_dir, filename), "w") as f:
        f.write(tmpl)

    print(f'Output at: {filename}')


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("paper_id", help="The id of the paper to retrieve.")
    parser.add_argument("out_path", help="The path to output the file.")
    args = parser.parse_args()

    get_info(args.paper_id, args.out_path)
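
Usage note (an assumption, not part of the commit): with the third-party arxiv package installed (e.g. pip install arxiv), running

    python3 _publications/add_from_arxiv.py 2305.01210 _publications

should fetch the paper's metadata and write a stub named like _publications/liu2023code.markdown, which appears to be how the file below was generated. The tags field is deliberately emitted as "TODO" so it can be edited by hand afterwards.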

_publications/liu2023code.markdown

Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
---
layout: publication
title: "Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation"
authors: Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, Lingming Zhang
conference:
year: 2023
additional_links:
   - {name: "ArXiV", url: "https://arxiv.org/abs/2305.01210"}
tags: ["evaluation"]
---
Program synthesis has been long studied with recent approaches focused on directly using the power of Large Language Models (LLMs) to generate code according to user intent written in natural language. Code evaluation datasets, containing curated synthesis problems with input/output test-cases, are used to measure the performance of various LLMs on code synthesis. However, test-cases in these datasets can be limited in both quantity and quality for fully assessing the functional correctness of the generated code. Such limitation in the existing benchmarks begs the following question: In the era of LLMs, is the code generated really correct? To answer this, we propose EvalPlus -- a code synthesis benchmarking framework to rigorously evaluate the functional correctness of LLM-synthesized code. In short, EvalPlus takes in the base evaluation dataset and uses an automatic input generation step to produce and diversify large amounts of new test inputs using both LLM-based and mutation-based input generators to further validate the synthesized code. We extend the popular HUMANEVAL benchmark and build HUMANEVAL+ with 81x additionally generated tests. Our extensive evaluation across 14 popular LLMs demonstrates that HUMANEVAL+ is able to catch significant amounts of previously undetected wrong code synthesized by LLMs, reducing the pass@k by 15.1% on average! Moreover, we even found several incorrect ground-truth implementations in HUMANEVAL. Our work not only indicates that prior popular code synthesis evaluation results do not accurately reflect the true performance of LLMs for code synthesis but also opens up a new direction to improve programming benchmarks through automated test input generation.
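
The abstract describes the EvalPlus recipe: start from a benchmark's base tests, automatically amplify the test inputs (LLM-seeded plus mutation-based generation), and judge an LLM-written program by differential comparison against the ground-truth solution. The snippet below is only a minimal toy sketch of that general recipe, not code from the paper or its benchmark; the mutate/amplify/check names and the halving task are illustrative assumptions.

    import random

    def mutate(x: int) -> int:
        # One toy type-preserving mutation; a stand-in for the richer
        # LLM-seeded and type-aware input generators described above.
        return random.choice([x + 1, x - 1, x * 2, -x, x + random.randint(-10, 10)])

    def amplify(seed_inputs, n_new=200):
        # Grow the base test inputs by repeatedly mutating previously seen ones.
        pool = list(seed_inputs)
        for _ in range(n_new):
            pool.append(mutate(random.choice(pool)))
        return pool

    def check(candidate, ground_truth, inputs):
        # Differential check: the candidate passes only if it matches the
        # ground-truth solution on every (amplified) input.
        for x in inputs:
            if candidate(x) != ground_truth(x):
                return False, x
        return True, None

    # Toy task: halve an integer, rounding toward negative infinity.
    def ground_truth(x):
        return x // 2

    def candidate(x):
        return int(x / 2)  # subtly wrong for negative odd x

    base_inputs = [0, 1, 4, 7]  # base tests miss the corner case
    print(check(candidate, ground_truth, base_inputs))           # (True, None)
    print(check(candidate, ground_truth, amplify(base_inputs)))  # likely (False, <negative odd x>)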
