Title: Robust Stabilization of Delayed Neural Networks: Dissipativity-Learning Approach
Authors: Ramasamy Saravanakumar; Hyung Soo Kang; Choon Ki Ahn; Xiaojie Su; Hamid Reza Karimi
Affiliations: Chongqing University; Politecnico di Milano; Mahidol University; Kunsan National University; Korea University
Date issued: 2019-03-01
Date available: 2020-01-27
Citation: IEEE Transactions on Neural Networks and Learning Systems, Vol. 30, No. 3 (2019), pp. 913-922
ISSN: 2162-2388; 2162-237X
DOI: 10.1109/TNNLS.2018.2852807
Scopus ID: 2-s2.0-85050997468
URI: https://repository.li.mahidol.ac.th/handle/20.500.14594/50642
Rights: © 2012 IEEE.
Subject: Computer Science
Type: Article
Indexed in: SCOPUS

Abstract: This paper examines the robust stabilization problem of continuous-time delayed neural networks via a dissipativity-learning approach. A new learning algorithm is established that guarantees both asymptotic stability and (Q,S,R)-α-dissipativity of the considered neural networks. The developed result encompasses several existing performance criteria, such as H∞ and passivity, within a unified framework. By introducing a Lyapunov-Krasovskii functional together with Legendre polynomials, a novel delay-dependent linear matrix inequality (LMI) condition and a learning algorithm for robust stabilization are derived. Illustrative examples demonstrate the usefulness of the established learning algorithm.
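Note: The abstract refers to (Q,S,R)-α-dissipativity without stating the defining inequality. For orientation, the sketch below gives the standard strict dissipativity inequality used in this literature; the paper's exact weighting matrices and sign conventions may differ slightly.

```latex
% Standard strict (Q,S,R)-\alpha-dissipativity (zero initial conditions),
% stated here as background; the paper's performance index presumably
% takes a form of this kind.
\int_{0}^{T} \Big( y^{\top}(t)\,Q\,y(t) + 2\,y^{\top}(t)\,S\,u(t)
    + u^{\top}(t)\,R\,u(t) \Big)\, dt
  \;\ge\; \alpha \int_{0}^{T} u^{\top}(t)\,u(t)\, dt,
  \qquad \forall\, T \ge 0.
%
% Commonly cited special cases, which is how H-infinity and passivity fit
% into the unified framework mentioned in the abstract:
%   Q = -I,  S = 0,  R = (\gamma^{2} + \alpha) I
%       \Rightarrow \int_0^T y^{\top} y\, dt \le \gamma^{2} \int_0^T u^{\top} u\, dt
%       (H-infinity performance with level \gamma)
%   Q = 0,   S = I,  R = \alpha I
%       \Rightarrow \int_0^T y^{\top}(t)\, u(t)\, dt \ge 0   (passivity)
```

The delay-dependent LMI condition itself is specific to the paper's Legendre-polynomial construction and is not reproduced in this record. Purely as an illustration of how such LMI feasibility tests are checked numerically, the following sketch verifies the classical delay-independent Lyapunov-Krasovskii LMI for a small assumed system using CVXPY; the matrices A, Ad and the tolerance eps are hypothetical placeholders, not values from the paper.

```python
# Minimal, hypothetical sketch: classical delay-independent stability LMI
# for x'(t) = A x(t) + A_d x(t - tau), checked with CVXPY. This is NOT the
# paper's Legendre-polynomial-based delay-dependent condition.
import numpy as np
import cvxpy as cp

n = 2
A  = np.array([[-3.0,  0.0],
               [ 0.0, -3.0]])   # illustrative system matrix (assumed)
Ad = np.array([[ 0.5,  0.1],
               [ 0.0,  0.5]])   # illustrative delayed-coupling matrix (assumed)

# Decision variables from the functional
# V(x_t) = x(t)' P x(t) + \int_{t-tau}^{t} x(s)' Q x(s) ds.
P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)

# Block LMI whose feasibility certifies asymptotic stability for every
# constant delay tau >= 0 (delay-independent criterion).
lmi = cp.bmat([[A.T @ P + P @ A + Q, P @ Ad],
               [Ad.T @ P,            -Q    ]])

eps = 1e-6
constraints = [P >> eps * np.eye(n),
               Q >> eps * np.eye(n),
               lmi << -eps * np.eye(2 * n)]

prob = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
prob.solve(solver=cp.SCS)
print("LMI feasibility status:", prob.status)   # 'optimal' => certificate (P, Q) found
```

If the solver reports 'optimal', the returned P and Q constitute the stability certificate; infeasibility only means this particular (conservative) criterion fails, not that the system is unstable, which is the motivation for sharper delay-dependent conditions such as the one developed in the paper.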