Code and Data for Graded Multilabel Classification
General Information and Citation
On this page you will find the data and the code for the various approaches from
C. Brinker, E. Loza Mencía and J. Fürnkranz, "Graded Multilabel Classification by Pairwise Comparisons", in Proceedings of the 14th International Conference on Data Mining (ICDM-2014), 2014 [bibtex]
and the experiments described there. A longer version with more detailed and complete descriptions can be found in
C. Brinker, E. Loza Mencía and J. Fürnkranz, "Graded multilabel classification by pairwise comparisons", TU Darmstadt, Tech. Rep. TUD-KE-2014-01, 2014. [pdf] [bibtex]
The code uses an outdated version of the LPCforSOS framework, which requires Java 1.6 and Weka 3.7. The code was developed with the Eclipse IDE. Because the approaches use the framework in mutually incompatible ways, each approach was developed in its own branch of the framework as a separate Eclipse project. Each approach is represented as a separate folder in the archive file GMLC_Code.zip.
NOTE: We obtained the code of the IBLR-ML algorithm directly from the authors, so you may obtain it from them as well, or use the version contained in MULAN.
The following datasets were used in the evaluation:
- BeLaE: This dataset consists of 1930 instances, each representing a graduate student. Each instance has 50 attributes. Two attributes, age and sex, characterize the student; the remaining 48 attributes represent the actual questions posed to the students, which asked about the importance of certain properties of their future jobs. Each answer has a grade from `1' (completely unimportant) to `5' (very important). In view of the lack of a more comprehensive and informative characterization of the students, Cheng et al. decided to use a subset of the question answers as additional attributes for characterizing the students. Following the same setup, we generated 50 datasets by randomly choosing a subset of n questions as target labels. The remaining 50-n attributes were used as features of the instances. We generated two kinds of datasets, for n=5 and n=10, respectively.
- movies: We collected a dataset from the German TV program guide TVSpielfilm.de, which rates movies by assigning grades to the categories `fun', `action', `sex', `suspense' and `sophistication' rather than giving an overall rating. Each category has grades from `0' to `3'. In total, we had data for 1967 movies. For characterizing them, we extracted the associated summary texts from IMDB. Furthermore, we added the English title, the year, director's name, actors' names, characters' names, writers' names, runtime, country of origin, and language as text to the summary. The text was tokenized, stemmed with the Porter algorithm, and common English stopwords were filtered out. We then computed the TF-IDF values of the tokens on the respective training data of the 10-fold cross validation.
- medical: The medical dataset consists of 1953 free-text radiology reports. They were collected for the CMC 2007 Medical Natural Language Processing Challenge (homepage no longer online), and three expert companies were asked to annotate them with a set of ICD-9-CM disease/diagnosis classification codes. In the original dataset for the multilabel classification competition, a document was assigned a code if there was a consensus among at least two of the annotators on that code. In contrast, we generated a GMLC dataset by considering the level of agreement as the grade of the assignment. The texts were processed as for the movies dataset, but we used absolute term frequencies instead of TF-IDF.
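To illustrate how the medical dataset turns annotator agreement into graded labels, here is a minimal sketch; the class and method names are hypothetical and not taken from the released code:

```java
// Hypothetical sketch: deriving a graded label from per-annotator code assignments.
// With three annotators, the grade is simply the number of annotators (0..3) that
// assigned a given ICD-9-CM code to the report, whereas the original competition
// setup binarized this to "assigned" only when at least two annotators agreed.
public class AgreementGrades {

    /** Grade of a code assignment = number of annotators that assigned the code. */
    public static int grade(boolean[] annotatorAssigned) {
        int count = 0;
        for (boolean assigned : annotatorAssigned) {
            if (assigned) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // Three annotators, one code: two of them assigned it -> grade 2.
        boolean[] votes = {true, true, false};
        System.out.println("grade = " + grade(votes));
    }
}
```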
The following four different kinds of experiments were performed:
- 10-fold cross validation on each of the 50 generated datasets from BeLaE with 5 labels (BeLaE_m5.zip)
- 10-fold cross validation on each of the 50 generated datasets from BeLaE with 10 labels (BeLaE_m10.zip)
- 10-fold cross validation on the medical dataset (medical.zip)
- 10-fold cross validation on the movies dataset (movies.zip, TF-IDF was computed separately on the training data of each split)
For each experiment there is a separate main class in each of the projects.
They can be found in each project under "<project-folder>/src/main/java/LPCforSOS/evaluation/".
The main method of each class takes three arguments:
- Prediction method: 'v' - voting, 'w' - weighted voting, 't' - voting with weighted voting as tie breaking strategy
- path of the directory containing the dataset(s)
- path and name of the output file for the results (results will be appended)
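As a rough illustration of the three prediction modes, the following sketch aggregates (already collected) votes per grade; it is only an assumption about the aggregation logic, and the framework's actual classes operate on pairwise comparisons with different interfaces:

```java
// Illustrative sketch of the prediction modes 'v', 'w' and 't': for one label,
// votes[g] holds the number of pairwise votes for grade g, and weights[g] the
// summed vote weights (e.g. classifier confidences) for grade g.
public class GradePrediction {

    /** 'v': plain voting -- the grade with the most votes wins (lowest grade on ties). */
    public static int voting(int[] votes) {
        int best = 0;
        for (int g = 1; g < votes.length; g++) {
            if (votes[g] > votes[best]) {
                best = g;
            }
        }
        return best;
    }

    /** 'w': weighted voting -- the grade with the highest summed vote weight wins. */
    public static int weightedVoting(double[] weights) {
        int best = 0;
        for (int g = 1; g < weights.length; g++) {
            if (weights[g] > weights[best]) {
                best = g;
            }
        }
        return best;
    }

    /** 't': plain voting, using weighted voting among the tied grades as tie breaker. */
    public static int votingWithTieBreak(int[] votes, double[] weights) {
        int max = votes[voting(votes)];
        int best = -1;
        for (int g = 0; g < votes.length; g++) {
            if (votes[g] == max && (best < 0 || weights[g] > weights[best])) {
                best = g;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] votes = {2, 3, 3, 1};               // grades 1 and 2 are tied
        double[] weights = {0.1, 0.4, 0.9, 0.2};
        System.out.println("v: " + voting(votes));
        System.out.println("t: " + votingWithTieBreak(votes, weights));
    }
}
```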
NOTE: Weighted voting and tie breaking are not actually implemented for all of the approaches. If you select them for an approach that does not implement them, the program will terminate immediately.
NOTE: Some of the experiments take a long time. The code is not optimized for short runtimes, and the experiments can take several hours. If you want to speed them up, note that the slowest part is the training of the many base classifiers, which can easily be parallelized using threads and/or clusters.
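One possible way to parallelize the base-classifier training with threads is sketched below; `trainOne` is a hypothetical stand-in for whatever Weka training the framework performs per base classifier, and the sketch sticks to Java 1.6-era concurrency APIs:

```java
// Minimal sketch: training the pairwise base classifiers in a fixed-size thread
// pool instead of sequentially. trainOne() is a placeholder for the real (and
// much more expensive) Weka classifier training.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelTraining {

    /** Placeholder for training one base classifier; returns a dummy "model". */
    static String trainOne(int classifierId) {
        return "model-" + classifierId;
    }

    /** Trains numClassifiers base classifiers concurrently on `threads` threads. */
    public static List<String> trainAll(int numClassifiers, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<Future<String>>();
            for (int i = 0; i < numClassifiers; i++) {
                final int id = i;
                futures.add(pool.submit(new Callable<String>() {
                    public String call() {
                        return trainOne(id);
                    }
                }));
            }
            // Collect the trained models in submission order.
            List<String> models = new ArrayList<String>();
            for (Future<String> f : futures) {
                models.add(f.get());
            }
            return models;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(trainAll(6, 3));
    }
}
```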