Interacting with Deep Generative Models for Content Creation


Deep generative models have made great progress in synthesizing realistic data across domains such as image generation and speech synthesis. Generative modeling brings a paradigm shift in AI, from content classification and regression to content analysis and creation.

This tutorial introduces the basics of deep generative models, such as Variational AutoEncoders (VAEs) and Generative Adversarial Networks (GANs), as well as how users can interact with these models for content manipulation and creation. By completing this workshop, you will understand deep generative models, their strengths and weaknesses, and their promising applications in content analytics and creation. The workshop focuses on generative models for image synthesis, but the methodology introduced can be extended to other domains.
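As a taste of the VAE material, the sketch below illustrates the reparameterization trick, the sampling step at the heart of a VAE: instead of sampling z directly from N(mu, sigma^2), the model samples noise eps from N(0, I) and computes z = mu + sigma * eps, which keeps the operation differentiable with respect to the encoder outputs. This is a minimal NumPy illustration, not the workshop's code; the toy mu and log_var values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    The reparameterization trick used by VAEs: sampling stays
    differentiable with respect to mu and log_var because the
    randomness is isolated in eps.
    """
    eps = rng.standard_normal(mu.shape)
    sigma = np.exp(0.5 * log_var)
    return mu + sigma * eps

# Toy encoder outputs (hypothetical values, for illustration only).
mu = np.zeros(4)
log_var = np.zeros(4)  # log_var = 0 means sigma = 1
z = reparameterize(mu, log_var, rng)
print(z.shape)  # a 4-dimensional latent sample
```

In a real VAE, mu and log_var come from an encoder network, and z is fed to a decoder that reconstructs the input.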

Session Outline
Lecture 1: Introduction to Deep Generative Models
The basics of the deep generative models and their development.

Lecture 2: Content Creation using Deep Generative Models
What the internal representations of deep generative models are, and how to interact with the models for content analysis and creation.
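Interacting with a generative model's internal representation often comes down to simple arithmetic in its latent space: moving a latent code along a semantic direction to edit an attribute, or interpolating between two codes to morph between outputs. The sketch below shows these two operations on plain NumPy vectors; the semantic direction is a hypothetical unit vector (in practice such directions are discovered by probing a trained model), and `G` stands in for a pretrained generator not shown here.

```python
import numpy as np

def edit_latent(z, direction, alpha):
    """Move a latent code along a semantic direction.

    Feeding G(z + alpha * direction) to a pretrained generator G
    changes the corresponding attribute of the output (e.g. lighting
    or pose in a generated image). `direction` is assumed to be a
    unit vector found by analyzing the model's latent space.
    """
    return z + alpha * direction

def interpolate(z0, z1, t):
    """Linearly interpolate between two latent codes, 0 <= t <= 1.

    Decoding the interpolated codes G(interpolate(z0, z1, t)) for a
    sweep of t produces a smooth morph between the two outputs.
    """
    return (1.0 - t) * z0 + t * z1

# Toy example with random latent codes (illustration only).
rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(8), rng.standard_normal(8)
midpoint = interpolate(z0, z1, 0.5)
```

The key point is that these edits happen entirely in latent space; the generator itself is untouched, which is what makes this style of interaction cheap and model-agnostic.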

Background Knowledge
Some machine learning experience


Bolei Zhou is an Assistant Professor in the Information Engineering Department at the Chinese University of Hong Kong. He received his PhD in computer science from the Massachusetts Institute of Technology. His research is on machine perception and decision making, with a focus on visual scene understanding and interpretable AI systems. He received the MIT Tech Review's Innovators Under 35 in Asia-Pacific award, the Facebook Fellowship, the Microsoft Research Asia Fellowship, and the MIT Greater China Fellowship, and his research has been featured in media outlets such as TechCrunch, Quartz, and MIT News. More about his research is at