Title: How Large Language Models Enhance Topic Modeling on User-Generated Content
Authors: Bui M.P.; Nguyen M.T.N.
Affiliation: Mahidol University
Type: Conference Paper
Published in: Journal of Physics: Conference Series, Vol. 3114, No. 1 (2025)
Date issued: 2025-01-01
Date accessioned/available: 2025-11-02
ISSN: 1742-6588
eISSN: 1742-6596
DOI: 10.1088/1742-6596/3114/1/012011
Scopus ID: 2-s2.0-105019743703
Source: SCOPUS
URI: https://repository.li.mahidol.ac.th/handle/123456789/112896
Subject: Physics and Astronomy

Abstract: Understanding user-generated content (UGC) is crucial for obtaining actionable insights in domains such as e-commerce and hospitality. However, the noisy and redundant nature of such content presents challenges for topic modeling methods like Latent Semantic Analysis (LSA). In this paper, we investigate whether preprocessing user reviews with large language models (LLMs) can improve topic modeling performance. Specifically, we compare two input variants: (1) raw reviews and (2) ChatGPT-generated summaries produced via API as concise keyphrases. We apply LSA with varimax rotation to each variant and evaluate the resulting topic models using multiple criteria, including topic coherence (c_v), average pairwise Jaccard overlap, and cluster compactness via silhouette scores. Unlike prior work that employs LLMs primarily for post hoc topic labeling or interpretation, our method integrates an LLM directly into the preprocessing pipeline to reshape noisy input into structured, standardized summaries. While ChatGPT-based preprocessing results in lower c_v coherence scores, likely due to reduced lexical redundancy, it significantly improves topic separation, cluster quality, and topical specificity, leading to more interpretable and well-structured topic models overall.
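A minimal sketch of the LSA-with-varimax stage described in the abstract, under assumed details not specified in the record (TF-IDF input features, a textbook varimax implementation, hard topic assignment by maximum loading before computing silhouette scores; the toy reviews are illustrative only):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics import silhouette_score

def varimax(Phi, gamma=1.0, q=20, tol=1e-6):
    # Textbook varimax rotation of a (terms x topics) loading matrix.
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(q):
        Lam = Phi @ R
        u, s, vt = np.linalg.svd(
            Phi.T @ (Lam**3 - (gamma / p) * Lam @ np.diag(np.diag(Lam.T @ Lam)))
        )
        R = u @ vt
        d_new = s.sum()
        if d != 0 and d_new / d < 1 + tol:
            break
        d = d_new
    return Phi @ R

# Toy stand-ins for raw reviews or LLM-generated keyphrase summaries.
reviews = [
    "great room clean staff friendly",
    "noisy room dirty bathroom",
    "friendly staff great location",
    "bathroom dirty and noisy at night",
]
X = TfidfVectorizer().fit_transform(reviews)

svd = TruncatedSVD(n_components=2, random_state=0)
doc_topic = svd.fit_transform(X)            # document-topic matrix (LSA)
loadings = varimax(svd.components_.T)       # rotated term-topic loadings

# Hard-assign each document to its dominant topic, then score compactness.
labels = doc_topic.argmax(axis=1)
if len(set(labels)) > 1:
    print("silhouette:", round(silhouette_score(doc_topic, labels), 3))
```

Comparing raw reviews against LLM summaries would amount to running this same pipeline on each input variant and contrasting the resulting coherence, overlap, and silhouette values.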