Natural Language Explanation in Code Clone Detection using LLM-based Post Hoc Explainer
Issued Date
2025-01-01
Resource Type
ISSN
1530-1362
Scopus ID
2-s2.0-105035215249
Journal Title
Proceedings Asia Pacific Software Engineering Conference APSEC
Start Page
862
End Page
866
Rights Holder(s)
SCOPUS
Bibliographic Citation
Proceedings Asia Pacific Software Engineering Conference APSEC (2025), 862-866
Suggested Citation
Racharak T., Ragkhitwetsagul C., Junplong C., Supratak A. Natural Language Explanation in Code Clone Detection using LLM-based Post Hoc Explainer. Proceedings Asia Pacific Software Engineering Conference APSEC (2025), 862-866. doi:10.1109/APSEC66846.2025.00093 Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/116246
Title
Natural Language Explanation in Code Clone Detection using LLM-based Post Hoc Explainer
Author(s)
Racharak T., Ragkhitwetsagul C., Junplong C., Supratak A.
Author's Affiliation
Corresponding Author(s)
Other Contributor(s)
Abstract
Recent studies highlight various machine learning (ML)-based techniques for code clone detection, which can be integrated into developer tools such as static code analysis. With the advancements brought by ML in code understanding, ML-based code clone detectors can accurately identify and classify cloned pairs, especially semantic clones, but often operate as black boxes, providing little insight into their decision-making process. Post hoc explainers, on the other hand, aim to interpret and explain the predictions of these ML models after they are made, offering a way to understand the underlying mechanisms driving the model's decisions. However, current post hoc techniques require white-box access to the ML model or are computationally expensive, indicating a need for advanced post hoc explainers. In this paper, we propose a novel framework that leverages the in-context learning capabilities of large language models to elucidate the predictions made by ML-based code clone detectors. We perform a study using ChatGPT-4 to explain the code clone results inferred by GraphCodeBERT. We found that our approach is promising as a post hoc explainer, giving correct explanations up to 98% of the time and good explanations 95% of the time. However, the explanations and the code line examples given by the LLM are useful only in some cases. We also found that lowering the temperature to zero helps increase the accuracy of the explanations. Lastly, we list insights that can lead to further improvements in future work. This study paves the way for future studies in utilizing LLMs as post hoc explainers for various software engineering tasks.
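The framework described in the abstract can be pictured as prompting an LLM, after the fact, to justify a verdict produced by a black-box detector. The sketch below is a hypothetical illustration of that idea, not the authors' actual prompt or code: the function name, prompt wording, and example fragments are all assumptions made for demonstration.

```python
# Hypothetical sketch: build a post hoc explanation prompt for an LLM
# (e.g. ChatGPT-4) given a clone/non-clone verdict from a black-box
# detector such as GraphCodeBERT. Prompt wording is illustrative only.

def build_explanation_prompt(code_a: str, code_b: str, prediction: str) -> str:
    """Compose a prompt asking the LLM to explain, in natural language,
    why the detector's verdict on the pair is (or is not) justified,
    citing concrete code lines as evidence."""
    return (
        f"A code clone detector labeled the following pair as '{prediction}'.\n\n"
        f"Fragment A:\n{code_a}\n\n"
        f"Fragment B:\n{code_b}\n\n"
        "Explain in natural language why this verdict is (or is not) "
        "justified, pointing to the specific code lines that support it."
    )

# Example: a semantic clone pair differing only in identifier names.
prompt = build_explanation_prompt(
    "def add(a, b):\n    return a + b",
    "def sum2(x, y):\n    return x + y",
    "clone",
)
```

In line with the abstract's observation that a temperature of zero improves explanation accuracy, such a prompt would typically be sent to the LLM with temperature set to 0 so the explanation is deterministic.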
