Commit d188b54

Add Jesse and tag maintenance
1 parent 70e0c84 commit d188b54

2 files changed: +13 −2 lines changed

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+---
+layout: publication
+title: "Large Language Models and Simple, Stupid Bugs"
+authors: Kevin Jesse, Toufique Ahmed, Premkumar T. Devanbu, Emily Morgan
+conference:
+year: 2023
+additional_links:
+  - {name: "ArXiV", url: "https://arxiv.org/abs/2303.11455"}
+tags: ["Transformer", "defect"]
+---
+With the advent of powerful neural language models, AI-based systems to assist developers in coding tasks are becoming widely available; Copilot is one such system. Copilot uses Codex, a large language model (LLM), to complete code conditioned on a preceding "prompt". Codex, however, is trained on public GitHub repositories, viz., on code that may include bugs and vulnerabilities. Previous studies [1], [2] show that Codex reproduces vulnerabilities seen in training. In this study, we examine how prone Codex is to generate an interesting bug category: single-statement bugs, commonly referred to as simple, stupid bugs or SStuBs in the MSR community. We find that Codex and similar LLMs do help avoid some SStuBs, but do produce known, verbatim SStuBs as much as 2x as often as known, verbatim correct code. We explore the consequences of the Codex-generated SStuBs and propose avoidance strategies that suggest the possibility of reducing the production of known, verbatim SStuBs and increasing the possibility of producing known, verbatim fixes.
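
For readers unfamiliar with the term: an SStuB is a bug whose fix is confined to a single statement, often a single-token edit. The snippet below is a hypothetical Python illustration of one common pattern (a wrong binary operator); it is only a sketch of the idea, not an example drawn from the paper or its dataset.

```python
# Hypothetical "simple, stupid bug" (SStuB): the entire fix is a
# one-token edit inside a single statement.

def in_bounds(i, items):
    # Buggy: '<=' admits an out-of-range index (off by one).
    return 0 <= i <= len(items)

def in_bounds_fixed(i, items):
    # Fixed: the only change is '<=' -> '<'.
    return 0 <= i < len(items)
```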

_publications/saberi2023model.markdown

Lines changed: 2 additions & 2 deletions
@@ -5,7 +5,7 @@ authors: Iman Saberi, Fateme H. Fard
 conference: MSR
 year: 2023
 additional_links:
-  - {name: "ArXiV", url: "https://arxiv.org/pdf/2303.06233"}
-tags: ["Adapters", "Pre-trained Programming Language", "Code Refinement", "Code Summarization"]
+  - {name: "ArXiV", url: "https://arxiv.org/abs/2303.06233"}
+tags: ["Transformer", "repair", "summarization"]
 ---
 Pre-trained Programming Language Models (PPLMs) have achieved many recent state-of-the-art results on code-related software engineering tasks. Though some studies use data flow or propose tree-based models that utilize the Abstract Syntax Tree (AST), most PPLMs do not fully utilize the rich syntactical information in source code; the input is still treated as a sequence of tokens. There are two issues: the first is computational inefficiency due to the quadratic relationship between input length and attention complexity. Second, any syntactical information, when needed as an extra input to the current PPLMs, requires the model to be pre-trained from scratch, wasting all the computational resources already used for pre-training the current models. In this work, we propose Named Entity Recognition (NER) adapters, lightweight modules that can be inserted into Transformer blocks to learn type information extracted from the AST. These adapters can be used with current PPLMs such as CodeBERT, GraphCodeBERT, and CodeT5. We train the NER adapters using a novel Token Type Classification objective function (TTC). We insert our proposed work into CodeBERT, building CodeBERTER, and evaluate the performance on two tasks: code refinement and code summarization. CodeBERTER improves the accuracy of code refinement from 16.4 to 17.8 while using 20% of the training parameter budget of full fine-tuning, and improves the BLEU score of code summarization from 14.75 to 15.90 while reducing training parameters by 77% compared to full fine-tuning.
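
To make the adapter idea concrete, below is a minimal sketch, assuming PyTorch, of a bottleneck-style adapter with a per-token type-classification head. The class name, bottleneck size, and the exact form of the TTC loss are illustrative assumptions; the paper's NER adapters and training objective may differ in detail.

```python
import torch
import torch.nn as nn

class NERAdapterSketch(nn.Module):
    """Bottleneck adapter (down-project, non-linearity, up-project, residual)
    meant to sit inside a Transformer block of a frozen PPLM. Illustrative only."""

    def __init__(self, hidden_size: int, bottleneck: int = 64, num_token_types: int = 8):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()
        # Head for a Token Type Classification (TTC) style objective:
        # predict an AST-derived type label for every input token.
        self.type_head = nn.Linear(hidden_size, num_token_types)

    def forward(self, hidden_states: torch.Tensor):
        adapted = self.up(self.act(self.down(hidden_states)))
        hidden_states = hidden_states + adapted          # residual connection
        type_logits = self.type_head(hidden_states)      # (batch, seq_len, num_token_types)
        return hidden_states, type_logits

def ttc_loss(type_logits: torch.Tensor, type_labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over per-token type labels (assumed form of the TTC loss)."""
    return nn.functional.cross_entropy(
        type_logits.view(-1, type_logits.size(-1)),
        type_labels.view(-1),
        ignore_index=-100,  # skip padding / special tokens
    )
```

Because only the adapter and its classification head receive gradients while the pre-trained backbone stays frozen, this style of tuning updates only a small fraction of the parameters, which is consistent with the parameter-budget savings reported in the abstract.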
