Abstract: This tutorial workshop will cover both statistical and neural-network-based models for time series analysis. It will be introductory in nature, focusing on a few workhorse models that are frequently applied to time series forecasting problems.
Specifically, I will sketch the family of Autoregressive Integrated Moving Average (ARIMA) models (with and without seasonal components), the class of Vector Autoregressive (VAR) models, and Long Short-Term Memory (LSTM) networks, discussing the advantages and disadvantages of each in time series forecasting scenarios. I will use real-world time series to illustrate the application of these techniques in Python.
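To give a flavor of the workshop's hands-on portion, here is a minimal NumPy-only sketch of the autoregressive idea at the heart of ARIMA models: regress the series on its own past values and use the fitted coefficient to forecast one step ahead. The workshop itself will use full-featured Python libraries; this stripped-down version, with synthetic data, is purely illustrative.

```python
import numpy as np

# Simulate an AR(1) process y[t] = 0.7 * y[t-1] + noise, for illustration only.
rng = np.random.default_rng(42)
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.normal()

# Least-squares estimate of the AR(1) coefficient phi:
# regress each value on its immediate predecessor.
x_past, x_next = y[:-1], y[1:]
phi_hat = (x_past @ x_next) / (x_past @ x_past)

# One-step-ahead forecast is simply phi_hat times the last observed value.
forecast = phi_hat * y[-1]
print(f"estimated phi: {phi_hat:.2f}")
```

Full ARIMA models generalize this single lagged term to multiple autoregressive lags, moving-average terms, and differencing for non-stationary series.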
Forecasting is both a fascinating subject to study and an important technique applied in industry, government and academic settings. Example applications include demand and inventory planning, marketing strategy planning, capital budgeting, pricing, machine predictive maintenance, macroeconomic forecasting, and supply chain forecasting.
Forecasting typically requires time series data, and time series data is ubiquitous nowadays, both within and outside of the data science field: weekly initial unemployment claims, tick-level stock prices, weekly company sales, daily step counts recorded by a wearable, machine performance measurements recorded by sensors, and key performance indicators of business functions, to name just a few.
However, time series data differs from cross-sectional data in that it exhibits temporal dependence, and this dependence can be leveraged to forecast future values of the series. Some of the most important and commonly used techniques for analyzing time series data and making forecasts come from the fields of statistics and machine learning. For this reason, statistical and machine learning time series models should be part of any data scientist's toolkit.
This presentation is suitable for anyone who is unfamiliar with statistical and neural-network-based time series modeling and wants to learn the basics of time series analysis, modeling, and forecasting. The audience may include data scientists, data engineers, and data science/engineering VPs, directors, and managers who have not been trained in statistics, econometrics, or machine learning and have not had much exposure to time series analysis and forecasting.
Bio: Jeffrey is currently the Global Head of Data Science and Analytics at Amazon Music. Prior to Amazon, Jeffrey worked at WalmartLabs as the VP of Data Science, Data Engineering, and Platform Engineering. Before joining WalmartLabs, he spent nearly his entire career in quantitative finance. His last role in the investment management industry was as Chief Data Scientist and Global Head of Data Science at AllianceBernstein (AB), a global investment management firm managing almost $800B. Before AB, he was the VP and Head of Data Science at Silicon Valley Data Science, a startup acquired by Apple in 2017. Earlier in his career, he held various quantitative leadership positions, including Corporate VP and Head of Risk Analytics and Quantitative Research at Charles Schwab Corporation, Director of Financial Risk Consulting at KPMG, and Assistant Director at Moody's Analytics. Jeffrey enjoys academic research and teaching. He has taught finance, economics, machine learning, and statistics at the University of Pennsylvania, Virginia Tech, Cornell, NYU, and UC Berkeley. He is a frequent speaker at national and international A.I., data science, and technology conferences, such as Spark&AI Summit, Strata, ODSC, PyCon, and many others. He holds a Ph.D. and an M.A. in Economics from the University of Pennsylvania and a B.S. in Mathematics and Economics from UCLA.