Seminar aus Maschinellem Lernen und Data Mining

Domain-knowledge for Deep Learning

The seminar is available in TUCaN under module number 20-00-0102.

When and Where?

The kick-off meeting will take place on Tuesday, 23.04.2019 at 17:10-18:50 in room S1|10 211 (access via S1|11, Alexanderstraße 4).

Do not miss the kick-off meeting if you want to participate in the seminar!

All further meetings (30.04.–16.07.) will take place on Tuesdays at 17:10–18:50 in room S2|02 E202 (Piloty building).


In this semester's machine learning and data mining seminar, we discuss papers related to Domain-knowledge for Deep Learning.

In case you want to recap the basics of deep learning, here are some resources:


The language used in the seminar will be English.

The topics for the talks will be assigned in the kick-off meeting. 

Prior knowledge is not required, but prior knowledge in data mining and machine learning will be helpful. Participation is limited to 20 students. In case more students apply, students with prior knowledge in data mining and knowledge discovery will be preferred. The selection will be made at the kick-off meeting. If there are more qualified candidates than topics, we will use random selection.

The students are expected to give a 20-minute talk on the material they are assigned, followed by feedback, questions, and discussion. Although each topic is typically associated with a single paper, the point of the talk is not to exactly reproduce the entire contents of the paper, but to communicate the key ideas of the methods that are introduced in the paper. Thus, the content of the talk should exceed the scope of the paper and demonstrate that a thorough understanding of the material was achieved. See also our general advice on giving talks.

For further questions, feel free to send us an email. No prior registration is needed; however, please still send us an email so that we can estimate the number of participants beforehand and have your email address for possible announcements. Also make sure that you are registered in TUCaN.


The talks are expected to be accompanied by slides. Students have to send us the slides one week before their talk. We will use this opportunity to provide early feedback on common problems such as too many slides, too much text on the slides, small font sizes, etc. Both the talk and the slides should be in English.

There will be two talks in each meeting. As mentioned above, each topic is associated with one paper, but the talk should not reproduce the content of the paper verbatim; rather, it should communicate the key ideas of the introduced method.

All papers should be freely available on the internet or via the ULB. Note that some paper sources such as SpringerLink often only work on campus networks (sometimes not even via VPN). If you cannot find a paper, contact us.


The slides, the presentation, and the question-and-answer session of the talk will influence the overall grade. Furthermore, students are expected to actively participate in the discussions, and this will also be part of the final grade.

To achieve a grade in the 1.x range, the talk needs to exceed a mere recitation of the given material and include your own ideas, your own experiments, or even demos. An exact recitation of the papers will lead to a grade in the 2.x range. A weak presentation and a lack of engagement in the discussions may lead to a grade in the 3.x range, or worse. Please also read our guidelines for giving a talk very carefully.

In addition to the grading, we will also give public feedback on the talks immediately after the talks, and we are considering a best presentation award at the end of the seminar.


Here is a list of topics; each topic consists of two seminar talks (indicated by the bullet list). For each seminar talk, we give 1–3 papers as a starting point. However, note that you are not supposed to reproduce the papers in every detail. For most talks, you should explain the method that is introduced in the paper(s) and show where and how it can be used. Often you will find much better examples or use cases in later publications on these methods. See also our guidelines for giving a talk.

Limitations of Deep Learning (14.5.)

  • Malte L.
    Sun, Chen, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta, ‘Revisiting Unreasonable Effectiveness of Data in Deep Learning Era’, in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 843–852. Slides.
  • Lars E.
    Zhang, Chiyuan, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals, ‘Understanding Deep Learning Requires Rethinking Generalization’, 2016. Slides.

Posterior Regularization (21.5.)

  • NN
    Ganchev, Kuzman, João Graça, Jennifer Gillenwater, and Ben Taskar, ‘Posterior Regularization for Structured Latent Variable Models’, Journal of Machine Learning Research, 11 (2010), 2002–49
  • Patricia H.
    Hu, Zhiting, Xuezhe Ma, Zhengzhong Liu, Eduard Hovy, and Eric Xing, ‘Harnessing Deep Neural Networks with Logic Rules’, in arXiv, 2016. Slides.

Knowledge Distillation (28.5.)

  • Nicolas D.
    Buciluǎ, Cristian, Rich Caruana, and Alexandru Niculescu-Mizil, ‘Model Compression’, in Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006, pp. 535–41
  • Alexander M.
    Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean, ‘Distilling the Knowledge in a Neural Network’, in arXiv, 2015, pp. 1–9

Deep Learning and Decision Trees (4.6.)

  • Leon K. (-> 9.7.)
    Mike Wu, Michael C Hughes, Sonali Parbhoo, Maurizio Zazzi, Volker Roth, and Finale Doshi-Velez. 2017. Beyond Sparsity: Tree Regularization of Deep Models for Interpretability. arXiv preprint arXiv:1711.06178 (2017).
  • Lukas W.
    Yongxin Yang, Irene Garcia Morillo, Timothy M. Hospedales: Deep Neural Decision Trees. CoRR abs/1806.06988 (2018). Slides.

Logic-based Neural Networks (11.6.)

  • Benedikt N.
    Towell, Geoffrey G., and Jude W. Shavlik, ‘Knowledge-Based Artificial Neural Networks’, Artificial Intelligence, 70 (1994), 119–65
  • Franz K.
    Son N. Tran, Artur S. d'Avila Garcez: Deep Logic Networks: Inserting and Extracting Knowledge From Deep Belief Networks. IEEE Transactions on Neural Networks and Learning Systems, 2016.

Differentiable Logic (18.6.)

  • Thomas L.
    William W. Cohen, Fan Yang, Kathryn Mazaitis: TensorLog: Deep Learning Meets Probabilistic DBs. CoRR abs/1707.05390 (2017)
  • Felix O.
    Richard Evans, Edward Grefenstette: Learning Explanatory Rules from Noisy Data. J. Artif. Intell. Res. 61: 1-64 (2018)

Graph Convolutional Neural Networks (25.6.)

Knowledge Graphs (2.7.)

  • Alexander K.
    Annervaz, K M, Somnath Basu Roy Chowdhury, and Ambedkar Dukkipati, ‘Learning beyond Datasets: Knowledge Graph Augmented Neural Networks for Natural Language Processing’, in NAACL, 2018, pp. 313–22. Slides.
  • Lennard R.
    Marino, Kenneth, Ruslan Salakhutdinov, and Abhinav Gupta, ‘The More You Know: Using Knowledge Graphs for Image Classification’, in Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 20–28. Slides.

Neural Networks with Memory (9.7.)

Sum-Product Networks (16.7.)

  • Idris N.
    Hoifung Poon, Pedro M. Domingos: Sum-Product Networks: A New Deep Architecture. UAI 2011: 337-346
  • Maximilian O.
    Robert Gens, Pedro M. Domingos: Discriminative Learning of Sum-Product Networks. NIPS 2012: 3248-3256




Knowledge Engineering Group

Fachbereich Informatik
TU Darmstadt

S2|02 D203
Hochschulstrasse 10

D-64289 Darmstadt

Phone: +49 6151 16-21811
Fax: +49 6151 16-21812
