Title: DataDecon: Data Cleansing Tools for Large Language Model with Efficient Decontamination Techniques
Authors: Yuenyong S., Buppodom N., Sangkaew K., Boonmeeprakob K., Boonkwan P., Jaroenkantasima J., Khlaisamniang P., Lertpiya A., Piyatumrong A., Rojratchadakorn P., Rugsujarit T., Saengsukhiran T., Saetan K., Sukprapa I., Thavornmongkol T., Thongthungwong N., Triamamornwooth P., Utupon C., Viriyayudhakorn K., Witchutanon P., Wongprayon S., Supnithi T.
Affiliation: Mahidol University
Date issued: 2024-01-01 (repository record: 2025-02-12)
Conference: 19th International Joint Symposium on Artificial Intelligence and Natural Language Processing (iSAI-NLP 2024)
Type: Conference Paper
Subjects: Computer Science; Engineering
URI: https://repository.li.mahidol.ac.th/handle/20.500.14594/104253
Indexed in: SCOPUS
DOI: 10.1109/iSAI-NLP64410.2024.10799278
Scopus ID: 2-s2.0-85216587816

Abstract: Large language models (LLMs) play an important role in modern NLP technology, as they are versatile across a wide array of NLP tasks. However, constructing an LLM is challenging: construction pipelines are often concealed, cleansed datasets are unavailable, and hyperparameter settings go undisclosed, making the process nearly irreproducible. This paper presents an efficient pipeline for constructing an LLM tailored to a low-to-medium-resource language whose corpora exhibit a high level of data contamination, together with tools to cleanse the dataset. Following our pipeline, we constructed OpenThaiGPT, an LLM for Thai, using only open-source datasets such as CC100, OSCAR, and mC4, and achieved state-of-the-art accuracy on our downstream tasks. We disclose the data statistics and all hyperparameter settings for reproducibility.
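The record above does not describe DataDecon's actual decontamination algorithm. As a minimal illustrative sketch of the general technique the title refers to, one common approach drops any training document that shares a long n-gram with the evaluation data. Everything below (function names, the n-gram length, the toy data) is an assumption for illustration, not the paper's method.

```python
# Minimal sketch of n-gram-based decontamination, a common technique for
# removing benchmark overlap from training corpora. This is NOT necessarily
# the method used by DataDecon; all names and thresholds are hypothetical.

def ngrams(tokens, n):
    """Yield successive n-grams (as tuples) from a token list."""
    return zip(*(tokens[i:] for i in range(n)))

def build_eval_index(eval_texts, n=13):
    """Collect every n-gram that appears in the evaluation/benchmark data."""
    index = set()
    for text in eval_texts:
        index.update(ngrams(text.lower().split(), n))
    return index

def decontaminate(train_texts, eval_index, n=13):
    """Keep only training documents that share no n-gram with the eval set."""
    kept = []
    for text in train_texts:
        doc_ngrams = set(ngrams(text.lower().split(), n))
        if doc_ngrams.isdisjoint(eval_index):
            kept.append(text)
    return kept

if __name__ == "__main__":
    eval_set = ["what is the capital of thailand"]
    train = [
        "bangkok is a large city",
        # contaminated: contains the eval question verbatim
        "quiz: what is the capital of thailand answer bangkok",
    ]
    index = build_eval_index(eval_set, n=5)
    print(decontaminate(train, index, n=5))  # keeps only the clean document
```

A short n (5 here, for the toy example) catches more overlap at the cost of false positives; production pipelines typically use longer n-grams (e.g., around 13 tokens) and hash the n-grams to keep the index memory-efficient.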