リケラボ論文検索 (Rikelab Paper Search) is a service for searching, in one place, doctoral theses and faculty papers held in university repositories across Japan.



Learning on Hypergraphs with Sparsity

Nguyen, Hao Canh; Mamitsuka, Hiroshi (京都大学). DOI: 10.1109/tpami.2020.2974746

2021.08

Abstract

A hypergraph is a general way of representing high-order relations on a set of objects. It generalizes a graph, in which only pairwise relations can be represented, and finds applications in domains where relationships among more than two objects are observed. On a hypergraph, as on a graph, one wishes to learn a function that is smooth with respect to its topology. A fundamental issue is finding suitable smoothness measures for functions on the nodes of a graph or hypergraph. We present a general framework that subsumes previously proposed smoothness measures and also generates new ones. To address irrelevant or noisy data, we incorporate sparse learning into learning on hypergraphs: we propose sparsely smooth formulations that learn smooth functions while inducing sparsity at both the hyperedge and node levels. We analyze their properties and establish sparse support recovery results. Experiments show that our sparsely smooth models are beneficial on irrelevant and noisy data, and usually give similar or improved performance compared to dense models.
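The abstract's central notion, a smoothness measure for node functions on a hypergraph, can be sketched with the classical normalized hypergraph Laplacian of Zhou et al. (2006), one of the previously proposed measures the paper generalizes. This is an illustrative toy example only, not the paper's sparsely smooth formulation; the hypergraph and function values below are made up:

```python
import numpy as np

# Toy hypergraph: 5 nodes, 3 hyperedges (each a list of node indices).
# Sketch of a classical smoothness measure (the normalized hypergraph
# Laplacian of Zhou et al., 2006); NOT the paper's own formulation.
n = 5
hyperedges = [[0, 1, 2], [1, 2, 3], [3, 4]]
w = np.ones(len(hyperedges))                 # hyperedge weights

# Incidence matrix H: H[v, e] = 1 iff node v belongs to hyperedge e.
H = np.zeros((n, len(hyperedges)))
for e, nodes in enumerate(hyperedges):
    H[nodes, e] = 1.0

d_v = H @ w                                  # node degrees
d_e = H.sum(axis=0)                          # hyperedge degrees (sizes)

# Normalized Laplacian: L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}.
A = H / np.sqrt(d_v)[:, None]                # Dv^{-1/2} H
L = np.eye(n) - (A * (w / d_e)) @ A.T

# f^T L f measures how smooth a node function f is on the hypergraph:
# it is small when f varies little within each hyperedge.
f_smooth = np.array([1.0, 1.0, 1.0, 0.0, 0.0])   # nearly constant per edge
f_rough = np.array([1.0, -1.0, 1.0, -1.0, 1.0])  # alternates within edges
s_smooth = float(f_smooth @ L @ f_smooth)
s_rough = float(f_rough @ L @ f_rough)
print(s_smooth, s_rough)                     # the smoother f scores lower
```

The paper's contribution, in these terms, is a framework covering such quadratic forms and variants (e.g. total-variation-style measures), plus regularizers that additionally zero out entire hyperedges or nodes.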

