How to Use Satellite Imagery to Be a Machine Learning Mantis Shrimp

Abstract: Humans have three color-sensing cone types, letting us see the rainbow; mantis shrimp[1] have 16 photoreceptor types, letting them see the world in ways we can only imagine. Now imagine having mantis shrimp vision over an entire town, country, or continent – well, today is your lucky day! In this session we will start by showing how satellite imagery actually lets you “see” in even more bands of color than the mantis (how about 26 bands?), with each band carrying a massive amount of data about the Earth. Then we will show you how to work with this data in Jupyter notebooks to extract all sorts of information about the world. Finally, we will wrap up by building ML models on this data, extracting the features we care about, and running them through a cloud-based processing pipeline. You will leave knowing a lot more about satellites, data extraction from imagery, and spatially based ML for understanding your world. Expect few slides and A LOT of code/demos!
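To give a flavor of the kind of band math the session covers, here is a minimal sketch of computing NDVI (Normalized Difference Vegetation Index) from red and near-infrared bands. The arrays below are synthetic stand-ins for real satellite bands (which, in a workshop setting, would typically be loaded from imagery with a library such as rasterio); only the NDVI formula itself is standard.

```python
import numpy as np

# Synthetic stand-ins for two spectral bands; real values would come
# from a multispectral satellite image (one 2-D array per band).
red = np.array([[0.2, 0.3],
                [0.4, 0.1]])
nir = np.array([[0.6, 0.5],
                [0.5, 0.4]])

# NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1;
# higher values generally indicate denser, healthier vegetation.
ndvi = (nir - red) / (nir + red)
print(ndvi)
```

The same per-pixel arithmetic scales from this 2x2 toy grid to full satellite scenes, which is what makes band indices like NDVI such convenient ML features.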


Bio: Steve is the Developer Relations lead for DigitalGlobe. He goes around and shows off all the great work the DigitalGlobe engineers do. He can teach you about data analysis with Java, Python, PostgreSQL, MongoDB, and some JavaScript. He has deep subject-area expertise in GIS/spatial analysis, statistics, and ecology. He has spoken at over 50 conferences and delivered over 30 workshops, including Monktoberfest, MongoNY, JavaOne, FOSS4G, CTIA, AjaxWorld, GeoWeb, Where2.0, and OSCON. Before DigitalGlobe, Steve was a developer evangelist for Red Hat, LinkedIn, deCarta, and ESRI. Steve has a Ph.D. in Ecology from the University of Connecticut. He likes building interesting applications and helping developers and data scientists do more with spatial data.

Open Data Science Conference