Abstract: Machine learning has been highly successful in data-intensive applications, but it is often hampered when the data set is small. Few-Shot Learning (FSL) was proposed to tackle this problem. It is a meta-learning technique used across fields such as computer vision and NLP, and it has gained popularity because it enables predictions from a limited number of examples with supervised information, that is, with few training samples. The goal of few-shot learning is not to have the model memorize the images in the training set and then generalize to the test set; instead, the goal is to learn how to learn ("learning to learn"). There are different types of networks within few-shot learning, such as Siamese networks and prototypical networks. This session will cover the fundamentals of few-shot learning, including some of the popular network architectures and the loss functions used with them, as well as applications of few-shot learning techniques in real-world use cases. The session is targeted at an audience with at least intermediate knowledge of neural networks and machine learning. It will also walk through a code implementation of Siamese Neural Networks with Triplet Loss on imagery data. By the end of the workshop, attendees will have a solid grasp of few-shot learning techniques and their applications, and will gain a deeper understanding of deep learning overall.
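Since the session works through Siamese networks trained with triplet loss, here is a minimal NumPy sketch of the triplet loss itself; the function name, margin value, and toy embeddings below are illustrative assumptions, not taken from the session materials:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss: pull the anchor toward the positive embedding and
    push it away from the negative one by at least `margin`.

    loss = max(||a - p||^2 - ||a - n||^2 + margin, 0)
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared distance to positive
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # squared distance to negative
    return np.maximum(d_pos - d_neg + margin, 0.0)

# Toy 2-D embeddings (illustrative values only)
a = np.array([0.0, 0.0])
p = np.array([1.0, 0.0])   # same class as the anchor
n = np.array([3.0, 0.0])   # different class

print(triplet_loss(a, p, n))  # 0.0 — the negative is already margin-far away
print(triplet_loss(a, n, p))  # 9.0 — swapped triplet violates the margin
```

In a Siamese setup, `anchor`, `positive`, and `negative` would be embeddings produced by the same shared-weight network, so minimizing this loss shapes the embedding space so that same-class images cluster together.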
Bio: Isha is a principal data scientist at Capital One, working in the conversational AI space. Prior to that, she worked at Ericsson as a data scientist on the computer vision team. She completed her master's degree in Urban Data Science at New York University in 2018 and has worked in several NYU research labs (the NYU Urban Observatory and the NYU Sounds of New York City (SONYC) lab). Before moving to New York, she lived in Hong Kong for 5 years, where she earned her bachelor's degree in Environmental Technology and Computer Science at the Hong Kong University of Science & Technology (HKUST) and later worked as a Research Assistant in the HKUST-Deutsche Telekom Systems and Media Lab (an augmented reality and computer vision focused lab).