
Abstract: Autoencoders are a special kind of neural network architecture. They are trained, in an unsupervised way, to reproduce their input as accurately as possible. This is done by encoding the input into a latent representation that forces the network to learn some kind of abstraction, and then reconstructing the original input from that representation.
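To make this concrete, here is a minimal sketch of such an autoencoder in TensorFlow/Keras, the library used in the workshop. The layer sizes and the MNIST example data are illustrative assumptions, not material from the workshop itself:

```python
# Minimal autoencoder sketch, assuming TensorFlow 2.x / Keras.
# Layer sizes and the MNIST data are illustrative assumptions.
import tensorflow as tf

inputs = tf.keras.Input(shape=(784,))  # flattened 28x28 images
# The narrow bottleneck forces the network to learn an abstraction.
latent = tf.keras.layers.Dense(32, activation="relu")(inputs)
outputs = tf.keras.layers.Dense(784, activation="sigmoid")(latent)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Unsupervised training: the input is also the target.
(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256)
```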
The benefit of this may be hard to see at first, but we can make use of the approach in at least two ways. First, we can take the latent representation, which should now capture the abstract pattern of the inputs; it can be used for dimensionality reduction, clustering, or visualization. Second, we can use the reconstruction error as a measure of how well an input fits the learned concept, which lets us find outliers even for highly complex input types.
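Continuing the sketch above, both uses follow directly from the trained model. Again, the variable names and the cutoff of ten outliers are illustrative assumptions:

```python
# Sketch of both uses, continuing the model defined above.
import numpy as np

# 1) Latent representation: a second model that stops at the bottleneck,
#    usable for dimensionality reduction, clustering, or visualization.
encoder = tf.keras.Model(inputs, latent)
codes = encoder.predict(x_train[:1000])  # shape (1000, 32)

# 2) Reconstruction error: per-sample mean squared error; samples with
#    the largest error fit the learned concept worst and are outlier
#    candidates.
reconstructions = autoencoder.predict(x_train[:1000])
errors = np.mean(np.square(x_train[:1000] - reconstructions), axis=1)
outlier_indices = np.argsort(errors)[-10:]  # ten worst reconstructions
```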
In this workshop we will illustrate both approaches using a single consistent example. We will use TensorFlow in Colab notebooks, so all you need is a recent version of Chrome and a Google login.
Background Knowledge:
Attendees will not need prior knowledge of TensorFlow or the other tools mentioned, but a good understanding of how neural networks are trained is a prerequisite.
Bio: Oliver Zeigermann has been developing software with different approaches and programming languages for more than three decades. Over the past decade, he has focused on Machine Learning and its interactions with humans.

Oliver Zeigermann
Blue Collar ML Architect | Freelancer
