Article Information
Abstract
English
Relational classification (RC) is concerned with the application of statistical learning to relational data. RC models lack the stability needed to smooth the perturbations caused by variations in the correlations within relational data. Moreover, few studies have attempted to derive a generalization bound or develop a stability-based learning framework for RC models. To address this problem, we derive a learning bound based on a new measure, dependence stability, together with a limited Vapnik–Chervonenkis (VC) dimension. Building on this bound, we design a stable learning framework that serves as a guideline for developing new learning algorithms for a broad class of RC models. Applying a Markov logic network to synthetic and real-world datasets, our experimental results demonstrate that the bound is tight when the RC model has appropriate dependence stability and a limited VC dimension, and that our learning framework increases the stability of RC models while reducing the deviation between the empirical risk and the true risk.
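For orientation, bounds on the deviation between empirical and true risk typically take the following schematic form; this is a generic uniform-convergence template, not the paper's exact result, and the symbols below are illustrative:

```latex
% With probability at least 1 - \delta over the draw of the relational sample,
% for every hypothesis h in the class:
R(h) \;\le\; \widehat{R}(h) \;+\;
O\!\left(\sqrt{\frac{d \,\log(1/\delta)}{n_{\mathrm{eff}}}}\right),
```

where $R(h)$ is the true risk, $\widehat{R}(h)$ the empirical risk, $d$ the (limited) VC dimension, and $n_{\mathrm{eff}} \le n$ an effective sample size discounted by the dependence among relational examples. In this reading, a dependence-stability condition controls how far $n_{\mathrm{eff}}$ degrades relative to $n$, which is why the bound tightens when the RC model has appropriate dependence stability.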
Table of Contents
1. Introduction
2. Preliminaries
2.1. Setup
2.2. Dependence Measures
2.3. Calculation of the Dependence Measures
3. Dependence Stability of RC Models
4. Generalization Bounds
4.1. Concentration Inequality
4.2. Dependence Stability Learning Bounds
5. Stable Learning Framework
5.1. Feasibility Analysis
5.2. Learning Framework Design
6. Experiments
6.1. Synthetic and Real Datasets
6.2. Dependence Stability Learning
7. Conclusion
8. Appendix
References