Learning Gradient Boosted Multi-label Classification Rules
Type of publication: Inproceedings
Citation: rapp20boomer
Booktitle: Proceedings of the European Conference on Machine Learning (ECML 2020)
Year: 2020
Publisher: Springer
Note: To appear
URL: https://arxiv.org/abs/2006.13346
Abstract: In multi-label classification, where the evaluation of predictions is less straightforward than in single-label classification, various meaningful, though different, loss functions have been proposed. Ideally, the learning algorithm should be customizable towards a specific choice of the performance measure. Modern implementations of boosting, most prominently gradient boosted decision trees, appear to be appealing from this point of view. However, they are mostly limited to single-label classification, and hence not amenable to multi-label losses unless these are label-wise decomposable. In this work, we develop a generalization of the gradient boosting framework to multi-output problems and propose an algorithm for learning multi-label classification rules that is able to minimize decomposable as well as non-decomposable loss functions. Using the well-known Hamming loss and subset 0/1 loss as representatives, we analyze the abilities and limitations of our approach on synthetic data and evaluate its predictive performance on multi-label benchmarks.
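As a point of reference for the two losses named in the abstract, below is a minimal NumPy sketch contrasting the label-wise decomposable Hamming loss with the non-decomposable subset 0/1 loss; the array shapes and function names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def hamming_loss(y_true, y_pred):
    # Label-wise decomposable: fraction of individual labels predicted incorrectly
    return np.mean(y_true != y_pred)

def subset_zero_one_loss(y_true, y_pred):
    # Non-decomposable: an example counts as an error unless ALL of its labels match
    return np.mean(np.any(y_true != y_pred, axis=1))

# Illustrative binary label matrices (rows = examples, columns = labels)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 0]])

print(hamming_loss(y_true, y_pred))          # 1 of 6 labels wrong -> ~0.167
print(subset_zero_one_loss(y_true, y_pred))  # 1 of 2 examples not exactly matched -> 0.5
```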
Keywords: Gradient boosting, Multi-label classification, Rule learning
Authors: Rapp, Michael
Loza Mencía, Eneldo
Fürnkranz, Johannes
Nguyen, Vu-Linh
Hüllermeier, Eyke