Our book on probabilistic deep learning is finally out as an early access version (2 chapters published, 5 more to come). In a nutshell, it’s deep learning as curve fitting.
The first chapters explain “standard” deep learning using the maximum likelihood principle. Later chapters will also include a Bayesian approach to deal with uncertainty. Go and check it out! :-)
https://www.manning.com/books/probabilistic-deep-learning-with-python You can find the Jupyter notebooks here.
A selection of recent research projects.
Recently, deep neural networks have become standard in many areas of analysing perception data such as images or audio. However, they can be spectacularly wrong. Consider, e.g., the case where a network is asked to classify an image of a class it has never seen before; technically speaking, the image class is not in the training set. This is shown in the following picture, where a network trained on dogs is shown a cat.
The network overconfidently assigns a high probability to the wrong class. I get scared when I think of self-driving cars being overconfident. While standard networks cannot state the uncertainty of a prediction, novel methods allow uncertainty information to be included. One way to do so is to use dropout also during test time. Yarin Gal has shown in his PhD thesis that this allows uncertainty to be quantified.
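As a minimal sketch of the idea (not the code from any of the papers below), the following numpy toy keeps dropout active at prediction time and uses the spread over repeated stochastic forward passes as an uncertainty estimate. The network architecture, weights, and sizes are all made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer classifier with fixed ("pretrained") weights.
# All sizes and weight values are invented for this sketch.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_stochastic(x, drop_rate=0.5):
    # The key trick: dropout stays ACTIVE at prediction time.
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate   # fresh dropout mask per call
    h = h * mask / (1.0 - drop_rate)         # inverted-dropout scaling
    return softmax(h @ W2)

x = rng.normal(size=4)        # one input sample
T = 200                       # number of stochastic forward passes
samples = np.stack([predict_stochastic(x) for _ in range(T)])

mean_prob = samples.mean(axis=0)  # averaged class probabilities
std_prob = samples.std(axis=0)    # spread across passes ~ predictive uncertainty
```

A large spread (`std_prob`) for the winning class is a warning sign that the input may lie outside what the network was trained on.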
We use this approach in several places:
Paper 2018 on the use in high content screening: “Know When You Don’t Know: A Robust Deep Learning Approach in the Presence of Unknown Phenotypes” ASSAY and Drug Development Technologies.
Poster 2018 (Swiss Data Science Conference, Lausanne) “Are you serious”
And in other ongoing projects, stay tuned.
In high content phenotypic screening, millions of cells need to be classified based on their morphology. Images of the cells are taken in 5 different ‘color channels’ using different fluorescent markers.
In this project we compared the conventional pipeline (upper branch) against a deep learning approach (lower branch). In the deep learning approach, the construction of appropriate feature definitions is part of the training, whereas in the traditional pipeline expert knowledge is required for the tedious creation of handcrafted features. Compared to the best traditional method, the deep learning approach reduces the misclassification rate from 8.9% to 6.6%.
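To make the “features are learned, not handcrafted” point concrete, here is a toy numpy sketch of the basic building block: a single convolution filter applied across all 5 channels of a cell image. In a trained network, many such kernels are fitted to the data during training; here the image and kernel are just random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy 5-channel cell image (e.g. 5 fluorescent markers), 32x32 pixels.
image = rng.random((5, 32, 32))

# In the deep learning branch, filters like this are LEARNED during training
# rather than designed by an expert; here we just initialise one randomly.
kernel = rng.normal(size=(5, 3, 3))

def conv2d_valid(img, k):
    """Plain 'valid' cross-correlation summed over all channels (no padding)."""
    c, h, w = img.shape
    kc, kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[:, i:i + kh, j:j + kw] * k)
    return out

feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)  # (30, 30)
```

Stacking many such filters, nonlinearities, and pooling layers yields the feature hierarchy that replaces the handcrafted features of the traditional pipeline.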
For more information, see our poster and talk at the SIBS 2015 in Basel or have a look at our paper: Dürr, O., and Sick, B. Single-cell phenotype classification using deep convolutional neural networks. Journal of Biomolecular Screening (2016).
The pi-Vision project was my first research project applying deep learning. The task was to use a Raspberry Pi minicomputer for face recognition. Among other approaches, we used a convolutional neural network, a recently developed technique revolutionising image recognition. The following video (please turn off the sound!) shows the trained network making predictions.
We present a new layout algorithm for complex networks that combines a multi-scale approach for community detection with a standard force-directed design. Since community detection is computationally cheap, we can exploit the multi-scale approach to generate network configurations with close-to-minimal energy very fast. As a further asset, we can use the knowledge of the community structure to facilitate the interpretation of large networks, for example the network defined by protein-protein interactions.
Below you find a video showing the algorithm at work, disentangling the network of streets in the UK, which has 4824 vertices and 6827 edges.
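To give a flavour of the force-directed ingredient, here is a minimal Fruchterman–Reingold-style sketch in plain numpy, run on a made-up toy graph of two triangles joined by one edge. This is only the standard force-directed part; the community-detection stage of the actual algorithm, which supplies good coarse starting positions, is omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy graph: two triangles joined by a single edge (as an edge list).
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
n = 6
pos0 = rng.random((n, 2))  # random initial positions

def force_directed(pos, edges, iters=100, k=0.3):
    pos = pos.copy()
    for it in range(iters):
        disp = np.zeros_like(pos)
        # Repulsion between every pair of nodes (magnitude k^2 / dist).
        for i in range(len(pos)):
            delta = pos[i] - pos
            dist = np.linalg.norm(delta, axis=1)
            dist[i] = np.inf  # no self-repulsion
            disp[i] += (delta.T * (k**2 / dist**2)).T.sum(axis=0)
        # Spring attraction along edges (magnitude dist^2 / k).
        for u, v in edges:
            delta = pos[u] - pos[v]
            dist = np.linalg.norm(delta) + 1e-9
            disp[u] -= delta * (dist / k)
            disp[v] += delta * (dist / k)
        # Limit each move by a cooling "temperature" so the layout settles.
        t = 0.1 * (1.0 - it / iters)
        lengths = np.linalg.norm(disp, axis=1) + 1e-9
        pos += disp * (np.minimum(lengths, t) / lengths)[:, None]
    return pos

pos = force_directed(pos0, edges)
```

Starting such an iteration from positions suggested by the community structure, rather than at random, is what makes the multi-scale approach reach a close-to-minimal-energy configuration quickly.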