Test It Before You Trust It: Applying Software Testing for Trustworthy In-Context Learning
Issued Date
2026-01-01
Resource Type
ISSN
03029743
eISSN
16113349
Scopus ID
2-s2.0-105010821204
Journal Title
Lecture Notes in Computer Science
Volume
15836 LNCS
Start Page
243
End Page
258
Rights Holder(s)
SCOPUS
Bibliographic Citation
Lecture Notes in Computer Science Vol.15836 LNCS (2026) , 243-258
Suggested Citation
Racharak T., Ragkhitwetsagul C., Sontesadisai C., Sunetnanta T. Test It Before You Trust It: Applying Software Testing for Trustworthy In-Context Learning. Lecture Notes in Computer Science Vol.15836 LNCS (2026), 243-258. doi:10.1007/978-3-031-97141-9_17 Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/114416
Title
Test It Before You Trust It: Applying Software Testing for Trustworthy In-Context Learning
Author's Affiliation
Corresponding Author(s)
Other Contributor(s)
Abstract
In-context learning (ICL) has emerged as a powerful capability of large language models (LLMs), enabling them to perform new tasks from a few provided examples without explicit fine-tuning. Despite their impressive adaptability, these models remain vulnerable to subtle adversarial perturbations and exhibit unpredictable behavior when faced with linguistic variations. Inspired by software testing principles, we introduce MMT4NL, a framework for evaluating the trustworthiness of in-context learning through adversarial perturbations and software testing techniques. It covers diverse linguistic capabilities for testing the ICL behavior of LLMs. MMT4NL is built around crafting metamorphic adversarial examples from a test set in order to quantify and pinpoint bugs in the designed ICL prompts. Our philosophy is to treat any LLM as software and to validate its functionality just as software is tested. Finally, we demonstrate applications of MMT4NL on sentiment-analysis and question-answering tasks. Our experiments reveal various linguistic bugs in state-of-the-art LLMs.
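The metamorphic-testing idea described in the abstract can be sketched as follows: a meaning-preserving perturbation applied to a test input should leave the model's prediction unchanged, and any input whose prediction flips is flagged as a linguistic bug. This is a minimal illustrative sketch, not the MMT4NL implementation: `toy_sentiment_model` is a hypothetical stand-in for an LLM queried with an ICL prompt, and the perturbation shown is one simple example of the general pattern.

```python
def toy_sentiment_model(text: str) -> str:
    """Hypothetical stand-in for an LLM answering via an ICL prompt."""
    negative = {"bad", "terrible", "awful", "boring"}
    words = {w.strip(".,!?").lower() for w in text.split()}
    return "negative" if words & negative else "positive"

def add_neutral_clause(text: str) -> str:
    """A meaning-preserving (metamorphic) perturbation: append a neutral clause."""
    return text + " I watched it yesterday."

def metamorphic_test(model, perturb, inputs):
    """Return the inputs whose prediction changes under the perturbation."""
    bugs = []
    for text in inputs:
        if model(text) != model(perturb(text)):
            bugs.append(text)
    return bugs

tests = ["The movie was terrible.", "A wonderful, moving film."]
print(metamorphic_test(toy_sentiment_model, add_neutral_clause, tests))  # -> []
```

An empty list means every prediction was invariant under the perturbation; with a real LLM, any returned input pinpoints a prompt whose behavior is not robust to that linguistic variation.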
