Commit 950eeb8 (1 parent: 8df12d6)

Apply suggestions from code review
fix spacing

File tree: 1 file changed (+1, -1 lines)

_publications/niu2022spt-code.markdown

Lines changed: 1 addition & 1 deletion
@@ -9,4 +9,4 @@ additional_links:
  - {name: "code", url: "https://github.com/NougatCA/SPT-Code"}
  tags: ["Transformer", "representation"]
  ---
- Recent years have seen the successful application of large pre-trained modelsto code representation learning, resulting in substantial improvements on manycode-related downstream tasks. But there are issues surrounding theirapplication to SE tasks. First, the majority of the pre-trained models focus onpre-training only the encoder of the Transformer. For generation tasks that areaddressed using models with the encoder-decoder architecture, however, there isno reason why the decoder should be left out during pre-training. Second, manyexisting pre-trained models, including state-of-the-art models such asT5-learning, simply reuse the pre-training tasks designed for naturallanguages. Moreover, to learn the natural language description of source codeneeded eventually for code-related tasks such as code summarization, existingpre-training tasks require a bilingual corpus composed of source code and theassociated natural language description, which severely limits the amount ofdata for pre-training. To this end, we propose SPT-Code, a sequence-to-sequencepre-trained model for source code. In order to pre-train SPT-Code in asequence-to-sequence manner and address the aforementioned weaknessesassociated with existing pre-training tasks, we introduce three pre-trainingtasks that are specifically designed to enable SPT-Code to learn knowledge ofsource code, the corresponding code structure, as well as a natural languagedescription of the code without relying on any bilingual corpus, and eventuallyexploit these three sources of information when it is applied to downstreamtasks. Experimental results demonstrate that SPT-Code achieves state-of-the-artperformance on five code-related downstream tasks after fine-tuning.
+ Recent years have seen the successful application of large pre-trained models to code representation learning, resulting in substantial improvements on many code-related downstream tasks. But there are issues surrounding their application to SE tasks. First, the majority of the pre-trained models focus on pre-training only the encoder of the Transformer. For generation tasks that are addressed using models with the encoder-decoder architecture, however, there is no reason why the decoder should be left out during pre-training. Second, many existing pre-trained models, including state-of-the-art models such as T5-learning, simply reuse the pre-training tasks designed for natural languages. Moreover, to learn the natural language description of source code needed eventually for code-related tasks such as code summarization, existing pre-training tasks require a bilingual corpus composed of source code and the associated natural language description, which severely limits the amount of data for pre-training. To this end, we propose SPT-Code, a sequence-to-sequence pre-trained model for source code. In order to pre-train SPT-Code in a sequence-to-sequence manner and address the aforementioned weaknesses associated with existing pre-training tasks, we introduce three pre-training tasks that are specifically designed to enable SPT-Code to learn knowledge of source code, the corresponding code structure, as well as a natural language description of the code without relying on any bilingual corpus, and eventually exploit these three sources of information when it is applied to downstream tasks. Experimental results demonstrate that SPT-Code achieves state-of-the-art performance on five code-related downstream tasks after fine-tuning.

0 commit comments
