Deep Learning on Mobile

Abstract: 

Over the last few years, convolutional neural networks (CNNs) have risen in popularity, especially in the area of computer vision. Many mobile applications running on smartphones and wearable devices could benefit from the new opportunities enabled by deep learning techniques. However, CNNs are by nature computationally and memory intensive, making them challenging to deploy on mobile devices.

This workshop explains how to practically bring the power of convolutional neural networks and deep learning to memory- and power-constrained devices like smartphones. You will learn strategies to work around these obstacles and build mobile-friendly shallow CNN architectures that significantly reduce the memory footprint, making models easier to store on a smartphone. The workshop also dives into a family of model compression techniques that prune the network for live image processing, enabling you to build a CNN optimized for inference on mobile devices. Along the way, you will learn practical strategies to preprocess your data in a manner that makes the models more efficient in the real world.
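As a concrete illustration of one such compression technique (a hedged sketch, not necessarily the exact workflow used in the workshop), the snippet below applies TensorFlow Lite's post-training dynamic-range quantization to a trained Keras model before bundling it into a mobile app; the model file names are placeholders.

```python
import tensorflow as tf

# Load a trained Keras model (hypothetical file name, for illustration only)
model = tf.keras.models.load_model("mobilenet_flowers.h5")

# Convert to TensorFlow Lite with the converter's default optimizations
# (dynamic-range quantization of the weights), which typically shrinks the
# on-disk size to roughly a quarter of the float32 original
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write out the compact .tflite file that the mobile app will ship with
with open("mobilenet_flowers_quant.tflite", "wb") as f:
    f.write(tflite_model)
```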

Following a step-by-step example of building an iOS deep learning app, we will discuss tips and tricks, speed-versus-accuracy trade-offs, and benchmarks on different hardware to show how to get started developing your own deep learning application suitable for deployment on storage- and power-constrained mobile devices. Similar techniques can also make deep neural networks more efficient in a regular cloud-based production environment, reducing the number of GPUs required and lowering cost.
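For the iOS path, one common approach (again a sketch under assumptions, not necessarily the workshop's exact pipeline) is to convert the trained model to Core ML with coremltools so it can run on-device; the file names are placeholders.

```python
import tensorflow as tf
import coremltools as ct

# Load the trained Keras model (hypothetical file name, for illustration only)
model = tf.keras.models.load_model("mobilenet_flowers.h5")

# Convert to Core ML; the resulting .mlmodel can be added to an Xcode project
# and executed on-device through Apple's Core ML runtime
mlmodel = ct.convert(model)
mlmodel.save("MobileNetFlowers.mlmodel")
```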

Bio: 

Anirudh is the Head of AI & Research at Aira (a visual interpreter for the blind) and was previously at Microsoft AI & Research, where he founded Seeing AI, a talking camera app for the blind community. He is also the co-author of the upcoming book ‘Practical Deep Learning for Cloud and Mobile’. He brings over a decade of production-oriented applied research experience on petabyte-scale datasets, with features shipped to about a billion people. He has prototyped ideas using computer vision and deep learning techniques for augmented reality, speech, productivity, and accessibility. Some of his recent work, which IEEE has called ‘life-changing’, has been honored by CES, the FCC, Cannes Lions, and the American Council of the Blind; showcased at events hosted by the White House, the House of Lords, and the World Economic Forum; featured on Netflix and National Geographic; and applauded by world leaders including Justin Trudeau and Theresa May.
