ML Inference on Mobile Devices with ONNX Runtime

Abstract: 

On-device AI model inference is essential for real-time experiences that must run locally on the phone. Building a scalable solution is difficult, however, given the wide range of hardware specs and platform requirements across Android and iOS: devices have limited storage and memory, and the platform app stores impose restrictions on the size of the app package.
ONNX Runtime Mobile is a new feature that addresses these needs. Developers can build a reduced-size runtime binary, integrate it into a phone application, and run inference on ONNX models locally on the device.
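As a rough sketch of that workflow (file names here are placeholders, and the exact tool invocation may differ across ONNX Runtime versions), a model is first converted to the ORT format used by the reduced-size runtime, then loaded into an inference session; the same Python API shown below is convenient for desktop testing, while Android and iOS apps use the equivalent Java/Objective-C/C APIs.

```python
# Sketch: convert an ONNX model to the ORT format used by ONNX Runtime Mobile,
# then run it. Assumes the onnxruntime package is installed and "model.onnx" exists.
#
# Step 1 (command line): convert to ORT format for the reduced-size runtime.
#   python -m onnxruntime.tools.convert_onnx_models_to_ort model.onnx
#
# Step 2: load and run the converted model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.ort")                 # load the ORT-format model
input_name = session.get_inputs()[0].name                   # first graph input
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)   # example image-shaped input
outputs = session.run(None, {input_name: dummy})            # run inference locally
print(outputs[0].shape)
```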
ONNX quantization techniques can further reduce model size by converting FP32 weights to INT8, and INT8 execution improves performance on the ARM processors found in mobile phones.
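As one example of this step (file names are placeholders), dynamic quantization from the onnxruntime.quantization module converts FP32 weights to INT8:

```python
# Sketch: dynamic quantization of an ONNX model's FP32 weights to INT8.
# "model.onnx" and "model.quant.onnx" are placeholder file names.
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    model_input="model.onnx",         # original FP32 model
    model_output="model.quant.onnx",  # quantized model with INT8 weights
    weight_type=QuantType.QInt8,      # store weights as signed 8-bit integers
)
```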

Bio: 

Guoyu Wang is a senior software engineer at Microsoft, where he works on ONNX Runtime to bring accelerated inferencing to mobile platforms. In the past, he also worked on Microsoft Office and shipped multiple versions of Microsoft PowerPoint. Guoyu earned his master's degree in Computer Science from the University of Toronto.
