
Commit 4b70e76

rajaswamallamanis authored and committed

Added naik2022probing.markdown and updated contributor affiliation for Rajaswa Patil

1 parent a6f9148

File tree

2 files changed: +14 −1 lines changed
naik2022probing.markdown

Lines changed: 13 additions & 0 deletions
@@ -0,0 +1,13 @@
+---
+layout: publication
+title: "Probing Semantic Grounding in Language Models of Code with Representational Similarity Analysis"
+authors: Shounak Naik, Rajaswa Patil, Swati Agarwal, Veeky Baths
+conference: International Conference on Advanced Data Mining and Applications (ADMA 2022)
+year: 2022
+additional_links:
+- {name: "ArXiV", url: "https://arxiv.org/abs/2207.07706"}
+- {name: "PDF", url: "https://link.springer.com/chapter/10.1007/978-3-031-22137-8_29"}
+- {name: "Code", url: "https://github.com/shounaknaik/Probing-Semantic-Grounding-in-Language-Models-of-Code-with-Representational-Similarity-Analysis"}
+tags: ["interpretability", "language model", "evaluation", "Transformer"]
+---
+Representational Similarity Analysis is a method from cognitive neuroscience for comparing representations from two different sources of data. In this paper, we propose using Representational Similarity Analysis to probe semantic grounding in language models of code. We probe representations from the CodeBERT model for semantic grounding using data from the IBM CodeNet dataset. Our experiments show that current pre-training methods do not induce semantic grounding in language models of code; instead, they optimize for form-based patterns. We also show that even a small amount of fine-tuning on semantically relevant tasks increases the semantic grounding in CodeBERT significantly. Our ablations over the input modality to the CodeBERT model show that bimodal inputs (code and natural language) yield better semantic grounding and sample efficiency during semantic fine-tuning than unimodal inputs (code only). Finally, our experiments with semantic perturbations in code reveal that CodeBERT can robustly distinguish between semantically correct and incorrect code.
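
For readers new to the method, here is a minimal sketch of an RSA comparison: build a representational dissimilarity matrix (RDM) for each source of representations, then rank-correlate the two RDMs. The matrix shapes, distance metric, and correlation choice below are illustrative assumptions, not the paper's exact experimental setup.

```python
# Minimal RSA sketch (assumed setup, not the paper's exact pipeline).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(reps: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix in condensed form:
    pairwise correlation distances between stimulus representations."""
    return pdist(reps, metric="correlation")

def rsa_score(reps_a: np.ndarray, reps_b: np.ndarray) -> float:
    """Spearman correlation between the two RDMs: how similarly the
    two sources arrange the same set of stimuli."""
    rho, _ = spearmanr(rdm(reps_a), rdm(reps_b))
    return rho

# Hypothetical usage: compare model embeddings of N code snippets against
# a reference representation (e.g. semantic features) of the same snippets.
rng = np.random.default_rng(0)
model_reps = rng.normal(size=(50, 768))     # e.g. one CodeBERT layer
reference_reps = rng.normal(size=(50, 32))  # e.g. reference features
print(f"RSA score: {rsa_score(model_reps, reference_reps):.3f}")
```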

index.md

Lines changed: 1 addition & 1 deletion
@@ -74,4 +74,4 @@ website. A comprehensive list can be found [here](https://github.com/ml4code/ml4
 * [Uri Alon](http://www.cs.technion.ac.il/~urialon/) Technion, Israel
 * [Shaked Brody](https://shakedbr.cswp.cs.technion.ac.il/) Technion, Israel
 * [Nghi D. Q. Bui](https://bdqnghi.github.io/) Singapore Management University, Singapore
-* [Rajaswa Patil](https://rajaswa.github.io/) TCS Research, India
+* [Rajaswa Patil](https://rajaswa.github.io/) Microsoft PROSE
