Publication:
Kernel mean embedding of distributions: A review and beyond

dc.contributor.author: Krikamol Muandet
dc.contributor.author: Kenji Fukumizu
dc.contributor.author: Bharath Sriperumbudur
dc.contributor.author: Bernhard Schölkopf
dc.contributor.other: Mahidol University
dc.contributor.other: Max Planck Institute for Intelligent Systems
dc.contributor.other: The Institute of Statistical Mathematics
dc.contributor.other: Pennsylvania State University
dc.date.accessioned: 2018-12-21T07:23:51Z
dc.date.accessioned: 2019-03-14T08:03:28Z
dc.date.available: 2018-12-21T07:23:51Z
dc.date.available: 2019-03-14T08:03:28Z
dc.date.issued: 2017-01-01
dc.description.abstract: © 2017 K. Muandet, K. Fukumizu, B. Sriperumbudur and B. Schölkopf. A Hilbert space embedding of a distribution, in short a kernel mean embedding, has recently emerged as a powerful tool for machine learning and statistical inference. The basic idea behind this framework is to map distributions into a reproducing kernel Hilbert space (RKHS) in which the whole arsenal of kernel methods can be extended to probability measures. It can be viewed as a generalization of the original "feature map" common to support vector machines (SVMs) and other kernel methods. In addition to the classical applications of kernel methods, the kernel mean embedding has found novel applications in fields ranging from probabilistic modeling to statistical inference, causal discovery, and deep learning. This survey aims to give a comprehensive review of existing work and recent advances in this research area, and to discuss challenging issues and open problems that could potentially lead to new research directions. The survey begins with a brief introduction to the RKHS and positive definite kernels, which form the backbone of this survey, followed by a thorough discussion of the Hilbert space embedding of marginal distributions, its theoretical guarantees, and a review of its applications. The embedding of distributions enables us to apply RKHS methods to probability measures, which prompts a wide range of applications such as kernel two-sample testing, independence testing, and learning on distributional data. Next, we discuss the Hilbert space embedding of conditional distributions, give theoretical insights, and review some applications. The conditional mean embedding enables us to perform the sum, product, and Bayes' rules, which are ubiquitous in graphical models, probabilistic inference, and reinforcement learning, in a non-parametric way using this new representation of distributions. We then discuss relationships between this framework and other related areas. Lastly, we give some suggestions on future research directions.
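The kernel two-sample testing mentioned in the abstract compares the RKHS distance between the empirical kernel mean embeddings of two samples, known as the maximum mean discrepancy (MMD). The following is a minimal sketch of the (biased) MMD² estimate with a Gaussian RBF kernel; the function names, the bandwidth choice `gamma=1.0`, and the toy data are illustrative assumptions, not part of the surveyed work.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2).
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2_biased(X, Y, gamma=1.0):
    # Biased MMD^2 estimate: the squared RKHS distance between the
    # empirical kernel mean embeddings (1/n) * sum_i k(x_i, .) of the
    # two samples, computed entirely from Gram-matrix averages.
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()

# Toy illustration (hypothetical data): two samples from the same
# distribution versus two samples whose means differ.
rng = np.random.default_rng(0)
same = mmd2_biased(rng.normal(size=(200, 2)), rng.normal(size=(200, 2)))
diff = mmd2_biased(rng.normal(size=(200, 2)),
                   rng.normal(2.0, 1.0, size=(200, 2)))
```

Samples drawn from different distributions yield a visibly larger MMD² than samples from the same distribution, which is the basis of the two-sample test; a practical test would also calibrate a rejection threshold, e.g. by permutation.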
dc.identifier.citation: Foundations and Trends in Machine Learning. Vol. 10, No. 1-2 (2017), 1-141
dc.identifier.doi: 10.1561/2200000060
dc.identifier.issn: 1935-8245
dc.identifier.issn: 1935-8237
dc.identifier.other: 2-s2.0-85030721843
dc.identifier.uri: https://repository.li.mahidol.ac.th/handle/20.500.14594/42422
dc.rights: Mahidol University
dc.rights.holder: SCOPUS
dc.source.uri: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85030721843&origin=inward
dc.subject: Computer Science
dc.title: Kernel mean embedding of distributions: A review and beyond
dc.type: Review
dspace.entity.type: Publication
mu.datasource.scopus: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=85030721843&origin=inward
