Title: Evaluating Trust in CNN Transfer Learning with Flower Image Classification via Heatmap-Based XAI
Authors: Tanawongsuwan R.; Phongsuphap S.; Mongkolwat P. (Mahidol University)
Date issued: 2025-07-01
Date deposited: 2025-08-14
Journal: ECTI Transactions on Computer and Information Technology, Vol. 19, No. 3 (2025), pp. 392-405
Handle: https://repository.li.mahidol.ac.th/handle/123456789/111612
DOI: 10.37936/ecti-cit.2025193.260320
Scopus: 2-s2.0-10501237884222869131
Type: Article (SCOPUS)
Subjects: Computer Science; Engineering; Decision Sciences

Abstract: Convolutional neural networks (CNNs) have demonstrated impressive performance in image classification tasks but are often criticized for their black-box nature, which complicates understanding their decision-making and assessing their reliability. Transfer learning with pre-trained CNNs is a widely used approach for tasks with limited data. This study evaluates the performance and explainability of popular CNN models on flower image classification using two custom datasets, Flower-8-One and Flower-8-Zoom. Employing Explainable AI (XAI) techniques such as Grad-CAM, this research visualizes CNN decision-making to uncover its alignment with human perception. A human study assesses trustworthiness by analyzing participants' confidence scores based on model visualizations. Results indicate strong CNN performance but highlight disparities between model-extracted features and human expectations. Among the models evaluated, Xception and Inception-v3 consistently earn higher trust ratings. These findings emphasize the necessity of XAI-driven evaluations to enhance trust and reliability in CNN-integrated systems, particularly in applications requiring human-computer interaction.
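The abstract's heatmap-based XAI method, Grad-CAM, reduces to a short computation once the last convolutional layer's activations and the gradients of the target class score with respect to them are available. The sketch below shows only that core step in NumPy, as a minimal illustration of the technique the article applies; the function name, array shapes, and toy inputs are this editor's assumptions, not code from the study, and a real pipeline would obtain the activations and gradients from a framework such as TensorFlow or PyTorch on a model like Xception or Inception-v3.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Minimal Grad-CAM heatmap (assumed shapes, not the paper's code).

    feature_maps: (K, H, W) activations of the last conv layer.
    gradients:    (K, H, W) d(class score)/d(activations).
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients per feature map.
    alphas = gradients.mean(axis=(1, 2))                # shape (K,)
    # Weighted sum of feature maps over the channel axis.
    cam = np.tensordot(alphas, feature_maps, axes=1)    # shape (H, W)
    # ReLU keeps only regions with positive influence on the class score.
    cam = np.maximum(cam, 0)
    if cam.max() > 0:
        cam = cam / cam.max()                           # normalize for display
    return cam

# Toy example: channel 0 supports the class, channel 1 opposes it.
fm = np.zeros((2, 4, 4))
fm[0, 1, 1] = 1.0   # channel 0 activates at (1, 1)
fm[1, 2, 2] = 1.0   # channel 1 activates at (2, 2)
gr = np.zeros((2, 4, 4))
gr[0] = 1.0         # positive gradient: evidence for the class
gr[1] = -1.0        # negative gradient: evidence against it
heatmap = grad_cam(fm, gr)  # highlights (1, 1); (2, 2) is zeroed by ReLU
```

The resulting heatmap is typically upsampled to the input resolution and overlaid on the image, which is the form shown to participants when trust ratings are collected.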