From machine learning to native GPU code generation, all in one language

Abstract: Julia is a modern language for mathematical computing. It provides the ease of Python with the speed of C. This combination has excited developers from academia to industry, and has created a large and thriving ecosystem of hundreds of thousands of users, thousands of contributors, and over 1,600 packages.

This workshop will introduce the audience to the power and beauty of the Julia programming language. We will introduce the language and its basic syntax, an efficient development setup, the type system, and multiple dispatch. We will see how to analyse the performance of Julia programs and extract the maximum possible performance from the hardware. The largest Julia application to date, the astronomy application Celeste, ran on 650,000 cores with 1.3 million threads over 178 TB of data. Celeste uses machine learning to automatically catalog over 188 million light sources on Cori, the world's 5th largest supercomputer.
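To give a flavour of the type system and multiple dispatch mentioned above, here is a minimal illustrative sketch; the type and function names are our own, not workshop material:

```julia
# Multiple dispatch: Julia selects a method based on the runtime
# types of *all* arguments. All names here are illustrative.
abstract type Shape end

struct Circle <: Shape
    r::Float64
end

struct Rect <: Shape
    w::Float64
    h::Float64
end

# One generic function `area`, with one method per concrete type.
area(c::Circle) = pi * c.r^2
area(s::Rect)   = s.w * s.h

# Generic code works over a heterogeneous collection of shapes.
total_area(shapes) = sum(area, shapes)

total_area([Circle(1.0), Rect(2.0, 3.0)])  # ≈ pi + 6
```

Because each method is compiled specialised to the concrete argument types, this style carries no abstraction penalty, which is central to how Julia combines productivity with performance.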

After a quick introduction, this workshop will focus on native code generation for GPUs from Julia. Using this foundation, we will demonstrate how a novel machine learning ecosystem is falling into place, one that addresses the two-language problem of writing machine learning systems in Python and C/CUDA. Programming across multiple languages splinters teams and communities, whereas a common language for both productivity and performance opens up powerful new possibilities. Attendees will learn how to program from the highest mathematical level of algorithms all the way down to the lowest levels of GPU programming for performance, all in Julia.
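The kind of kernel-level GPU programming the workshop builds toward can be sketched as follows. This assumes the CUDA.jl package (the workshop may instead use its predecessor, CUDAnative.jl); the kernel name and launch configuration are illustrative, and an NVIDIA GPU with a working driver is required:

```julia
using CUDA  # GPU programming stack for Julia (assumed here)

# A plain Julia function, compiled to native GPU code by @cuda.
function vadd!(c, a, b)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

a = CUDA.fill(1.0f0, 1024)        # arrays allocated on the device
b = CUDA.fill(2.0f0, 1024)
c = CUDA.zeros(Float32, 1024)

# Launch 4 blocks of 256 threads each; the same language covers
# both high-level array code and the low-level kernel.
@cuda threads=256 blocks=4 vadd!(c, a, b)
@assert all(Array(c) .== 3.0f0)   # copy back and check the result
```

The point of the sketch is that the kernel is ordinary Julia: the same multiple-dispatch and specialisation machinery that makes CPU code fast also drives GPU code generation.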

Attendees are expected to have a basic understanding of mathematics and some programming experience in any language.

Bio: Ranjan Anantharaman is a data scientist at Julia Computing, where he writes numerical software across a variety of domains. His interests span numerical linear algebra, machine learning, and high-performance computing.