Working group on Deep Learning
Contact persons:
Purpose and Topics
The goal of this working group is to enhance the expertise of WIAS in the emerging field of deep learning. The main focus is on the theoretical understanding of the universal efficiency of the deep learning approach across different classes of practical problems.
List of talks
Date, Time | Location | Speaker | Title |
09.06.2016, 16:00 | HVP 11A, room 4.13 | Valery Avanesov | Deep learning: introduction |
30.06.2016, 16:00 | MS 39, room 406 | Nazar Buzun | Deep Feedforward Networks |
07.07.2016, 16:00 | MS 39, room 406 | Egor Klochkov | Regularization for Deep Learning |
14.07.2016, 16:00 | MS 39, room 406 | Pavel Dvurechensky | Optimization for Training Deep Models |
18.08.2016, 16:00 | MS 39, room 406 | Pavel Dvurechensky | Convolutional Networks |
25.08.2016, 16:00 | MS 39, room 406 | Andzhey Koziuk | Sequence Modeling: Recurrent and Recursive Nets |
01.09.2016, 16:00 | MS 39, room 406 | Nazar Buzun | Linear Factor Models |
08.09.2016, 16:00 | MS 39, room 406 | Alexandra Suvorikova | Autoencoders |
15.09.2016, 16:00 | MS 39, room 406 | Cancelled | Cancelled |
22.09.2016, 16:00 | MS 39, room 406 | Andzhey Koziuk | Representation Learning |
29.09.2016, 16:00 | MS 39, room 406 | No seminar | No seminar |
06.10.2016, 16:00 | MS 39, room 406 | Nazar Buzun | Bellman's principle for optimal control problems |
10.10.2016, 16:00 | MS 39, room 406 | Prof. Vladimir Spokoiny | Dimension reduction |
17.10.2016, 16:00 | MS 39, room 406 | Prof. Vladimir Spokoiny | Dimension Reduction |
24.10.2016, 16:00 | MS 39, room 406 | Prof. Vladimir Spokoiny | Dimension Reduction |
31.10.2016, 16:00 | MS 39, room 406 | Egor Klochkov | Non-negative Matrix Factorization |
07.11.2016, 16:00 | MS 39, room 406 | No seminar | No seminar |
14.11.2016, 16:00 | HVP 11A, room 4.01 | Prof. Vladimir Spokoiny | Dimension Reduction |
21.11.2016, 16:00 | MS 39, room 406 | Cancelled | Cancelled |
28.11.2016, 16:00 | HVP 11A, room 4.01 | Andzhey Koziuk | Instrumental Variables |
05.12.2016, 16:00 | MS 39, room 406 | Christian Kröning | Semi-Supervised Learning |
09.01.2017, 16:00 | MS 39, room 406 | Prof. Reinhold Schneider | Hierarchical tensor representations and deep (convolutional) networks |
16.01.2017, 16:00 | MS 39, room 406 | Alexandra Carpentier | Non-linear Scattering |
23.01.2017, 16:00 | MS 39, room 406 | Martin Eigel | Tensor Representations |
30.01.2017, 16:00 | MS 39, room 406 | Christian Kröning | Semi-Supervised Learning (contd.) |
20.02.2017, 16:00 | MS 39, room 406 | No seminar | No seminar |
27.02.2017, 16:00 | MS 39, room 406 | No seminar | No seminar |
13.03.2017, 16:00 | MS 39, room 406 | John Schoenmakers | Overview on regression methods for optimal stopping and control |
21.04.2017, 15:00 | MS 39, room 406 | Andzhey Koziuk | Convolutional Sparse Coding |
28.04.2017, 15:00 | MS 39, room 406 | Nazar Buzun | Long Short Term Memory networks and their application for social network users classification |
05.05.2017, 15:00 | MS 39, room 406 | Alexandra Suvorikova | Wasserstein Training of Restricted Boltzmann Machines |
12.05.2017, 15:00 | MS 39, room 406 | Vladimir Spokoiny and Larisa Adamyan | Adaptive Weight Clustering |
19.05.2017, 15:00 | MS 39, room 406 | No seminar | No seminar |
26.05.2017, 15:00 | MS 39, room 406 | Pavel Gurevich | Certainty quantification in neural networks with regression |
02.06.2017, 15:00 | MS 39, room 406 | John Schoenmakers | Deep Learning for stochastic optimal stopping and control |
Reading list
- Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning, 2016, book in preparation for MIT Press, http://www.deeplearningbook.org
- Jürgen Schmidhuber, Deep Learning in Neural Networks: An Overview, 2014, arXiv:1404.7828
- Most cited deep learning papers: https://github.com/terryum/awesome-deep-learning-papers
- Reading list from deeplearning.net: http://deeplearning.net/reading-list/
Software and Tutorials
- Deep Learning Tutorials
The tutorials introduce some of the most important deep learning algorithms and show how to run them using Theano. Theano is a Python library that makes writing deep learning models easy and offers the option of training them on a GPU.
- Theano: A Python framework for fast computation of mathematical expressions
Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently.
- Keras: Deep Learning library for Theano and TensorFlow
Keras is a minimalist, highly modular neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation: being able to go from idea to result with the least possible delay is key to doing good research.
- TensorFlow (Google)
TensorFlow is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers on the Google Brain Team within Google's Machine Intelligence research organization for machine learning and deep neural network research, but the system is general enough to be applicable in a wide variety of other domains as well.
- TensorFlow Playground
A graphical interface for constructing and training a small neural network without any programming.
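The data-flow-graph idea shared by Theano and TensorFlow can be sketched in a few lines of plain Python: an expression is first built as a graph of operation nodes and only evaluated afterwards, once input values are supplied. This is a library-agnostic illustration only; all class and function names below are invented for the sketch and are not part of any library's API.

```python
import numpy as np

class Node:
    """A node in a toy data-flow graph: an operation plus its inputs."""
    def __init__(self, op, inputs):
        self.op = op          # callable applied to the evaluated inputs
        self.inputs = inputs  # parent nodes (the graph edges)

    def eval(self, feed):
        # Leaf nodes (placeholders) look their value up in `feed`;
        # interior nodes recursively evaluate their parents first.
        if self.op is None:
            return feed[self]
        return self.op(*(p.eval(feed) for p in self.inputs))

def placeholder():
    return Node(None, [])

def matmul(a, b):
    return Node(lambda x, y: x @ y, [a, b])

def relu(a):
    return Node(lambda x: np.maximum(x, 0.0), [a])

# One dense layer, y = relu(x @ W): the graph is defined symbolically
# first and evaluated afterwards -- the pattern both libraries use.
x, W = placeholder(), placeholder()
y = relu(matmul(x, W))

out = y.eval({x: np.array([[1.0, -2.0]]),
              W: np.array([[1.0], [1.0]])})
print(out)  # [[0.]] since 1*1 + (-2)*1 = -1, clipped to 0 by relu
```

Separating graph construction from evaluation is what lets the real libraries optimize the expression and dispatch it to a GPU before any data flows through it.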