Challenges in Adopting LLaMA: An Empirical Study of Discussions on Stack Overflow
Issued Date
2024-01-01
ISSN
1613-0073
Scopus ID
2-s2.0-85213732467
Journal Title
CEUR Workshop Proceedings
Volume
3864
Start Page
35
End Page
42
Rights Holder(s)
SCOPUS
Bibliographic Citation
CEUR Workshop Proceedings Vol. 3864 (2024), 35-42
Suggested Citation
Deeprom R., Yang S., Higo Y., Choetkiertikul M., Ragkhitwetsagul C. Challenges in Adopting LLaMA: An Empirical Study of Discussions on Stack Overflow. CEUR Workshop Proceedings Vol. 3864 (2024), 35-42. Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/102647
Abstract
LLaMA (Large Language Model Meta AI) has quickly gained traction among developers due to its wide-ranging applications and its ability to be integrated into software projects. As interest in LLaMA grows, discussions around it have surged on platforms such as Stack Overflow. The developer community, with its collaborative nature, serves as a valuable source for studying LLaMA’s quality, emerging trends, and insights into its usage. Despite this growing attention, no comprehensive study has examined how the community interacts with and discusses LLaMA. This study addresses that gap by exploring Stack Overflow conversations related to LLaMA and its quality, with the objective of identifying key themes and recurring patterns in these discussions. We systematically collected and analyzed 473 Stack Overflow posts that contained the keyword “LLaMA” or were tagged accordingly. The analysis revealed that prominent discussion topics include model configuration, error handling, and integration with other technologies. Furthermore, we identified frequently co-occurring tags, underscoring LLaMA’s place within the larger ecosystem of large language models and its interoperability with widely used tools such as Python and Hugging Face Transformers. The findings highlight the complexity of working with LLaMA, especially in model configuration and fine-tuning, indicating a need for better resources, documentation, and community support. The study also suggests that future development should prioritize interoperability with popular machine-learning frameworks to improve model quality and to strengthen LLaMA’s role in the AI ecosystem.
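The data-collection step described above (gathering Stack Overflow posts that mention “LLaMA” or carry a related tag) can be reproduced via the public Stack Exchange API. The sketch below builds a query URL for the v2.3 `search/advanced` endpoint; the specific keyword and tag values are illustrative assumptions, not the authors’ exact query.

```python
from urllib.parse import urlencode

# Public Stack Exchange API v2.3 full-text search endpoint
API_BASE = "https://api.stackexchange.com/2.3/search/advanced"

def build_query_url(keyword, tag=None, page=1):
    """Build a Stack Exchange API URL searching Stack Overflow posts.

    `keyword` is matched against post text via the `q` parameter;
    `tag` (optional) restricts results to posts with that tag.
    """
    params = {
        "order": "desc",
        "sort": "creation",      # newest posts first
        "q": keyword,            # full-text search term
        "site": "stackoverflow",
        "page": page,
        "pagesize": 100,         # maximum page size allowed by the API
    }
    if tag:
        params["tagged"] = tag
    return f"{API_BASE}?{urlencode(params)}"

# Example: posts mentioning "LLaMA" and tagged "llama" (tag name assumed)
url = build_query_url("LLaMA", tag="llama")
```

Paginating over such URLs (and de-duplicating results across the keyword and tag queries) would yield a corpus comparable to the 473 posts analyzed in the study.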