Schedule Guide for Pass Holders
The Accelerate AI schedule covers Tuesday, October 29th and Wednesday, October 30th. It is available to Accelerate AI 2-Day and 4-Day pass holders, as well as ODSC Platinum Business and VIP pass holders.
The ODSC Talks/Workshops schedule covers Thursday, October 31st and Friday, November 1st. It is available to Accelerate AI 4-Day pass holders, plus ODSC Silver, Gold, Platinum, Platinum Business, and VIP pass holders.
Speakers and schedule times are subject to change. More sessions added weekly.
Business Talk | AI Expertise
Expectations of AI are high, but few understand what it is or what it takes to deliver. The biggest barrier to AI success is not math or technology, but getting stakeholders to understand the process of an AI project and tying its outcomes to business value. In this session, we will introduce the AI project process diagram, discuss what each step means to a business leader and to a data scientist, and explain how to align both parties to drive business results. We will also discuss different methods of building an AI team within an organization – i.e. centralized or decentralized AI expertise…more details
Pedro Alves is the founder and CEO of Ople Inc., a company that uses artificial intelligence to build artificial intelligence (AI). Pedro loves data science and has spent the past seventeen years working in the area of artificial intelligence – spanning predicting, analyzing and visualizing data across social media content, photos, genomics, insurance fraud/costs, social graphs, human attraction, spam detection, and topic modeling to name a few. Realizing that he was learning by observing how algorithms learn from processing different models, Alves discovered that data scientists could benefit from AI that mimics this behavior of ‘learning to learn to learn.’ Thus, Ople was founded with the vision of advancing data science to make it accessible to domain experts in the business.
Alves holds a Ph.D. in Computational Biology from Yale University and is an active speaker and advisor within the AI community.
Michael I. Jordan is the Pehong Chen Distinguished Professor in the Department of Electrical Engineering and Computer Science and the Department of Statistics at the University of California, Berkeley. His research interests bridge the computational, statistical, cognitive, and biological sciences; in recent years, he has focused on Bayesian nonparametric analysis, probabilistic graphical models, spectral methods, kernel machines, and applications to problems in distributed computing systems, natural language processing, signal processing, and statistical genetics. Previously, he was a professor at MIT. Michael is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences and a fellow of the American Association for the Advancement of Science, the AAAI, ACM, ASA, CSS, IEEE, IMS, ISBA, and SIAM. He has been named a Neyman Lecturer and a Medallion Lecturer by the Institute of Mathematical Statistics. He received the David E. Rumelhart Prize in 2015 and the ACM/AAAI Allen Newell Award in 2009. Michael holds a master’s degree in mathematics from Arizona State University and a PhD in cognitive science from the University of California, San Diego.
As Chief Decision Scientist at Google Cloud, Cassie Kozyrkov advises leadership teams on decision process, AI strategy, and building data-driven organizations. She is the innovator behind bringing the practice of Decision Intelligence to Google, personally training over 15,000 Googlers. Prior to joining Google, Cassie worked as a data scientist and consultant. She holds degrees in mathematical statistics, economics, psychology, and neuroscience.
Artificial intelligence (AI) can transform products, customer experiences, and entire business models. But data architecture, deep learning, natural language processing, and so on are only part of the AI journey. Putting AI to work requires a holistic view, from the way you incorporate data science into your organization, to the approach you take to product and experience design, to the makeup of the teams who execute your strategy.
Entrusted with the rich financial data of 50 million customers, Intuit is in a unique position to take advantage of AI to help solve some of the biggest financial pain points for consumers and small businesses. Drawing upon real-world experiences at Intuit, Ashok Srivastava explains how to make your organization AI ready, determine the right AI applications for your business and products, and accelerate your AI efforts with speed and scale…more details
Ashok N. Srivastava, Ph.D. is the Senior Vice President and Chief Data Officer at Intuit. He is responsible for setting the vision and direction for large-scale machine learning and AI across the enterprise to help power prosperity across the world. He is hiring hundreds of people in machine learning, AI, and related areas at all levels.
Previously, he was Vice President of Big Data and Artificial Intelligence Systems and the Chief Data Scientist at Verizon. His global team focuses on building new revenue-generating products and services powered by big data and artificial intelligence. He is an Adjunct Professor at Stanford in the Electrical Engineering Department and is the Editor-in-Chief of the AIAA Journal of Aerospace Information Systems. Ashok is a Fellow of the IEEE, the American Association for the Advancement of Science (AAAS), and the American Institute of Aeronautics and Astronautics (AIAA).
Business Talk | AI Management
An ML model on a laptop is just a science project. To generate business value at scale, models need to feed applications, model pipelines, and reporting tools, but getting there isn’t easy. The path to production operationalization—and ROI—involves the automation of very specific deployment and management processes for which standard development tools are not designed.
This talk will touch on the unique challenges machine learning introduces to development organizations and detail the strategic decisions businesses must consider to create efficient processes that unlock real insights and maximize the productivity of their data science and DevOps teams…more details
Diego Oppenheimer, founder and CEO of Algorithmia, is an entrepreneur and product developer with an extensive background in all things data. Prior to founding Algorithmia, he designed, managed, and shipped some of Microsoft’s most used data analysis products, including Excel, Power Pivot, SQL Server, and Power BI.
Diego holds a Bachelor’s degree in Information Systems and a Master’s degree in Business Intelligence and Data Analytics from Carnegie Mellon University.
Business Talk | AI Innovation
Computer Vision is becoming the ultimate sensor. We present several applications where sensors from other domains are replaced with Computer Vision, reducing costs and increasing the generalizability of the sensor. These deployments run on Matroid, which delivers customized visual search and stream monitoring to a large number of users. Along the way, we explain how Matroid creates, trains, and visualizes CV models without programming, making them accessible to typical computer users who are not developers and allowing them to monitor video streams and visually search large collections of media. We conclude with some inspiring applications of CV in the medical domain, pushing the boundaries of medicine with cutting-edge glaucoma detection using Computer Vision…more details
Reza Bosagh Zadeh is Founder CEO at Matroid and an Adjunct Professor at Stanford University. His work focuses on Machine Learning, Distributed Computing, and Discrete Applied Mathematics. Reza received his PhD in Computational Mathematics from Stanford under the supervision of Gunnar Carlsson. His awards include a KDD Best Paper Award and the Gene Golub Outstanding Thesis Award. He has served on the Technical Advisory Boards of Microsoft and Databricks, and has been working on Artificial Intelligence since 2005, starting at age 18 when he worked in Google’s AI research team.
As part of his research, Reza built the Machine Learning Algorithms behind Twitter’s who-to-follow system, the first product to use Machine Learning at Twitter. Reza is the initial creator of the Linear Algebra Package in Apache Spark. Through Apache Spark, Reza’s work has been incorporated into industrial and academic cluster computing environments. In addition to research, Reza designed and teaches two PhD-level classes at Stanford: Distributed Algorithms and Optimization (CME 323), and Discrete Mathematics and Algorithms (CME 305).
Business Talk | AI Management
Cisco is transforming to a customer lifecycle value-based business, and data is foundational to that transformation. Shanthi Iyer, Cisco’s Chief Data Officer, will talk about the five requirements for a data-driven business: Provide a single point of engagement for data needs for the business; deliver an integrated platform and foundational data capabilities, with a technology toolkit for analytics; deliver enterprise analytics for innovating and scaling critical business priorities; provide data governance, quality controls, standards and policies on data lifecycle management; and incubate analytics talent and an experienced community of practice…more details
Shanthi Iyer is Vice President and Chief Data Officer of the Data & Analytics group within Cisco’s Operations organization. Cisco’s Data & Analytics team drives the enterprise-wide data and analytics strategy, governance, and policy framework, prioritizing and accelerating critical data-related activities to enhance company decision-making and execution through fact-based insights and intelligence, data science, and data-driven analytics. The team also provides data and insights as a service for Cisco.
Prior to her current role, Shanthi drove Enterprise Data, Security and Services, supporting business transformation by accelerating the next-generation analytics and technology platforms required by Cisco business. Previously, she led the Supply Chain Transformation program for Cisco’s $47 billion transactional fulfillment platform, implementing industry standard platforms and business processes and enabling new fulfillment models for Cisco.
A 23-year technology industry veteran, Shanthi started at Cisco in 1997 as an Oracle DBA. She has worked across Cisco, driving results and being disruptive in Infrastructure, Cisco Services, Commerce, Sales and Supply Chain. Prior to Cisco, Shanthi worked at Applied Materials in Santa Clara and MashreqBank in Dubai.
Shanthi has been recognized by the Silicon Valley YWCA Tribute to Women and received multiple leadership awards from the Stevie Awards for Women in Business. She also actively participates in India Connection and Women in Technology Action Network, which are Cisco organizations that are focused on career, leadership and personal mentoring, and networking.
Shanthi holds a B.S. in Math from the University of Madras and a diploma in Computer Science from the National Institute of Information Technology in Chennai. She also completed MIT’s Supply Chain Leadership Program and the Executive Data Science Program at Northwestern University.
Business Talk | AI Expertise
The third and final shift in reinforcement learning has been making waves in the artificial intelligence research community and in business enterprises. Early successes such as DeepMind’s AlphaGo have inspired real-world applications across industries including healthcare, retail, manufacturing, IoT, robotics, finance, industrial and geospatial platforms, recommendation systems, and text mining. Programming stacks such as TensorFlow, Python, and PyTorch are deployed on the production landscapes of many top-tier companies – such as Google, OpenAI, DeepMind, Spotify, Quora, and Reddit – running machine learning and reinforcement learning algorithms. Reinforcement learning and function approximation are built on mathematical foundations based on Markov decision processes (memoryless), with optimal state-value and Q-value functions that operate on state-action pairs…more details
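For readers unfamiliar with the Q-value functions the abstract mentions, a minimal sketch of tabular Q-learning on a toy two-state Markov decision process is shown below. The toy MDP, rewards, and hyperparameters are illustrative assumptions for this guide, not material from the session itself.

```python
# Tabular Q-learning on a toy 2-state MDP: action 0 stays in the current
# state, action 1 moves to the other state; landing in state 1 pays reward 1.
import random

random.seed(0)

def step(state, action):
    next_state = state if action == 0 else 1 - state
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration
Q = [[0.0, 0.0], [0.0, 0.0]]            # Q[state][action]

state = 0
for _ in range(2000):
    # Epsilon-greedy action selection over the current Q estimates.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = 0 if Q[state][0] >= Q[state][1] else 1
    next_state, reward = step(state, action)
    # Update toward the Bellman target: reward + discounted best future value.
    Q[state][action] += alpha * (
        reward + gamma * max(Q[next_state]) - Q[state][action]
    )
    state = next_state

# After training, moving from state 0 toward the rewarding state 1
# should have a higher learned value than staying put.
print(Q[0][1] > Q[0][0])
```

The single update line is the memoryless, state-action-pair mechanism the abstract refers to; deep reinforcement learning replaces the table `Q` with a neural network but keeps the same Bellman target.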
Business Talk | AI Innovation
This presentation will give a crash course in user-centric design, present arguments for why data scientists should care about design, and provide data scientists who build internal tools with design best practices. User-centric design, at its core, is a framework for understanding the user and his or her problems, and using those as a focal point for product design. Leading product companies spend huge resources to design their products well, but not all organizations with data scientists have designers to help them. At the same time, though, data scientists are increasingly building internal products for their organizations that embed data science into decision-making processes, so data scientists who want to see their tools get used should learn the best practices from user-centric design. This talk, co-presented by a data scientist and a designer, will start with design basics like building an understanding of the user, and then move on to more specific examples of user-centric data science products we’ve built…more details
Katie Malone is Director of Data Science at Civis Analytics, a data science software and services company. She leads a team of diverse data scientists who serve as technical and methodological advisors to the Civis consulting team, as well as writing the core machine learning and data science software that underpins the Civis Data Science Platform. Before working at Civis, she completed a PhD in physics at Stanford, working at CERN on Higgs boson searches. She was also the instructor of Udacity’s Introduction to Machine Learning course, and hosts Linear Digressions, a weekly podcast on data science and machine learning.
Annie Darmofal is Lead Product Designer at Civis Analytics, a data science software and services company. Heading up a team of user experience and visual designers, she leads user-centric product design for tools that serve both data scientists and business leaders. Her role across teams and personas puts her in a unique position to drive initiatives around solving big problems with systems thinking, including an overarching product vision and an organization-level design system. Prior to Civis, she was a designer at KnowledgeHound, a search-driven analytics platform that helps organizations tell stories with customer survey data.
Coming Soon!…more details
Peter Welinder is a Research Scientist at OpenAI, where he leads projects on learning-based robotics. His past projects include teaching robots to learn by imitating humans and autonomously manipulating objects with robotic hands. Previously, he was Head of Machine Learning at Dropbox, where he founded and managed applied machine learning and infrastructure teams. He founded a startup, Anchovi Labs, out of grad school which was acquired by Dropbox in 2012. Peter has a PhD in Computation and Neural Systems from Caltech and a degree in Physics from Imperial College London.
Business Talk | AI Expertise
As Machine Learning becomes a core component of any forward-looking company, how can we weave ML-driven functionality into the products and services we offer? This talk will explain the methodology we follow at Square to develop ML-driven customer-facing product features, which is based on paying close attention to four key and interdependent aspects: Design, Modeling, Engineering, and Analytics. Design is concerned about the usefulness and remarkability of the feature, and thus cares about the overall functionality, ease of use, and aesthetics of the experience. Modeling is concerned about the accuracy of the ML model, and thus cares about the training data, the features and performance of the model, and —crucially for a customer-facing product— how the application behaves in the face of the mistakes the model will inevitably make (false positives, false negatives, lack of predictions above a certain confidence)…more details
Marsal Gavalda is a senior R&D executive with deep expertise in speech, language, and machine learning technologies. Marsal currently leads the Commerce Platform Machine Learning team at Square, where he applies machine learning for economic empowerment and financial inclusion. Previously, Marsal headed the Machine Intelligence team at Yik Yak, where he developed natural language processing and machine learning services to analyze the content of messages, discover trends, and make recommendations at scale and across languages. Prior to that, Marsal served as the Director of Research at MindMeld (acquired by Cisco), where he applied the latest advances in speech recognition, language understanding, information retrieval, and machine learning to the MindMeld conversational and anticipatory computing platform. Marsal also has extensive experience in the customer interaction and speech analytics space, having served as VP and Chief of Research at Verint Systems and as VP of Research and Incubation at Nexidia (acquired by NICE), where he developed disruptive speech analytics solutions for the call center, intelligence, and media markets. Marsal holds a PhD in Language Technologies and an MS in Computational Linguistics, both from Carnegie Mellon University, and a BS in Computer Science from BarcelonaTech. Marsal is the author of over thirty technical and literary publications and thirteen issued patents, and is fluent in six languages. He is also a frequent speaker at academic and industry conferences and organizes, every summer, a science and humanities summit in Barcelona on topics as diverse as machine translation, music, or the neuroscience of free will.
Business Talk | AI Expertise
As enterprises strive to harness data science and AI from inward (i.e. automation to reduce operating expenses) and outward (i.e. creating new lines of business and routes to market which were previously not technically feasible) perspectives, the range of outcomes achieved (or not achieved) across and within industries continues to be incredibly wide. A common foundation within successful enterprises is that they all have effective data enablement practices. In the same way that high-achieving sales organizations are bolstered by competent sales enablement teams, data science and AI practitioners are significantly more impactful when provided the same type of support. This is the role that data evangelism can and should fill, and in doing so it enables greater achievements by organizations’ AI and data science teams. Through an approach as data-driven as data science itself, data evangelism has the potential not only to inspire individuals to leverage data, but to enable them to do so through a unique set of tools. This talk dives into this opportunity, presenting specific models and actionable insights…more details
Jennifer Redmon joined Cisco in 2009 and serves as its Chief Data Evangelist. Her organization enables an insight-driven culture through globally-scaled data products, services, and community enablement. In response to the shortage of data and analytical talent in the marketplace, her team has upskilled over 3,000 employees to date in the areas of data science, artificial intelligence, data storytelling and data engineering. By hosting virtual and physical events including AI/data science competitions and symposiums as well as always-on collaboration platforms, her organization interconnects and fosters a thriving federated community of practitioners who drive innovation across functions and geographies.
Jennifer holds an international MBA from Duke University with a concentration in Strategy, and a Bachelor’s in Economics and Art History from UC Davis.
Business Talk | AI Expertise
How do we know which forecasts to trust for our most critical business decisions? When stakes are high, big data and machine learning techniques can drive significant value across a wide variety of applications. However, finding the right approach is difficult. A tempting solution may perform well in one context but poorly in others, rely on unavailable information, or incur impractical costs. Whether it’s demand forecasting, supply chain management, or any other application, getting it right requires balancing the need for performance with the constraints of implementation and complexity.
We will discuss why organizations are turning to data-driven approaches to forecasting, applications and types of solutions, and challenges (both technical and practical) that arise during implementation. Attendees will leave oriented towards:
– Identifying types of forecasting applications and issues;
– Understanding the range of techniques available and related challenges;
– Evaluating potential data-driven approaches for your business;
– Measuring performance in the context of business objectives…more details
Javed is an economist and data scientist with experience in banking, finance, forecasting, risk management, consulting, policy, and behavioral economics. He has led development of analytic applications for large organizations including Amazon and the Federal Reserve Board of Governors, and served as a researcher with the Office of Financial Research (U.S. Treasury). He holds a PhD in financial economics and MA in statistics from U.C. Berkeley, as well as undergraduate degrees in operations management and systems engineering from the University of Pennsylvania. Currently, Javed is a Senior Data Scientist on the Corporate Training team at Metis, where he works with companies to upskill their staff in data science and analytics.
Business Talk | AI Expertise
The bane of any organization deploying high-quality machine learning technology is data wrangling. Data wrangling consists of data pre-processing, feature cleaning, and feature engineering. We estimate that data scientists spend upwards of 90% of their time wrestling with data, making it the biggest bottleneck to widespread Machine Learning adoption. Automating aspects of data wrangling would dramatically increase the adoption of Machine Learning technology across enterprise organizations. In this talk, Alex Holub, PhD, draws upon his experience in both industry and academia to illustrate why data wrangling is a challenge and some of the solutions being developed to automate it…more details
Alex Holub is the Co-founder and CEO of Vidora, a Machine Learning company focused on automating data wrangling and enabling operational intelligence for everyone. He received his Ph.D. at Caltech in Machine Learning and Computer Vision, and has published over 20 peer-reviewed articles in the areas of Machine Learning, Statistics, and Artificial Intelligence. Prior to Caltech, Alex received undergraduate degrees from Cornell in Computer Science and Neurobiology, and spent one year as a visiting scientist at the Max-Planck Institute for Biological Cybernetics in Tuebingen.
Business Talk | AI Management
Given more than a million legitimate Walmart store returns daily, identifying fraudulent returns in real time with minimal customer friction is a challenging problem. One reason is the lack of customer identity associated with in-store transactions. In addition, there are no confirmed fraud labels in situations where a fraudulent return is suspected. Finally, the customer is present when the decision to accept or deny the return is conveyed; incorrectly accusing a customer of return fraud typically insults the customer and damages customer relations. Accordingly, it would be desirable to provide an improved store-return fraud detection system. We propose a system that supports intelligent detection of anomalous sequences of activities and, through comprehensive evaluation of the distinct characteristics of fraudulent activities, enables the generation of high-confidence fraud labels for certain activity patterns…more details
Henry Chen is a Director of Data Science at Walmart Labs. He leads a data science team to combat Walmart store returns fraud, Marketplace seller fraud, and E-Commerce payment fraud using machine learning and deep learning techniques. Prior to that he was a Senior Manager of Data Science at PayPal, the co-principal investigator of several NSF and DARPA sponsored research projects, and a technologist for top Fortune 100 companies in the San Francisco Bay Area. Henry Chen holds a M.S. and a Ph.D. from University of California at Berkeley.
Vidhya Raman is a Data Science Manager at Walmart Labs. Her focus areas at Walmart are on real-time fraud detection for Store Returns and Marketplace Sellers. Prior to Walmart Labs, Vidhya has been in the product data sciences world focusing on – A/B testing, Conversion and Product Launch experiment evaluation. She also has expertise in the analytics management consulting space. She has a Master’s degree in Information Technology Management from the University of Texas at Dallas and a Bachelor’s degree in Electrical Engineering from India.
Jingru Zhou is a Senior Data Scientist at Walmart Labs in the Inkiru group. Her focus is on fraud detection for U.S. Walmart store returns utilizing machine learning techniques. She holds a Ph.D. in Electrical and Computer Engineering from the University of Utah, and her master’s and bachelor’s degrees from the University of Science and Technology of China.
Business Talk | AI Expertise
In today’s digital world, customers expect businesses to understand their needs. While this may sometimes sound like an exercise in clairvoyance, the truth is that many customers are able to articulate these expectations.
By using AI and machine learning to gather and analyze behavioral, social, and transactional data, organizations can develop a far deeper, more personal understanding of their customers and address their unique needs in a personal and relevant way.
This informative session will cover the challenges companies across industries are facing in driving AI-driven Digital Transformation and what successful organizations are doing to address those challenges in the real world. The challenges include the development of business cases, using data, scaling AI and ML, organizational structures to reduce friction, and re-orienting cultures…more details
Business Talk | AI Expertise
There is renewed interest among companies these days in implementing and deploying AI models in their business processes, either to increase automation or to improve human productivity. AI models are making their way as chatbots in customer support scenarios, as doctors’ assistants in hospitals, as research assistants in the legal domain, as marketing manager assistants in marketing, and as face detection applications in the security domain, to name just a few use cases. Making AI work for enterprises requires a whole new and different set of concerns to be addressed than those for traditional software applications or for consumer-facing AI models such as targeted advertising and product recommendations…more details
Rama Akkiraju is an IBM Fellow, Master Inventor, and IBM Academy Member, and a Director at IBM’s Watson Division, where she leads the AI operations team with a mission to scale AI for enterprises. Rama also heads the AI mission of enabling natural, personalized, and compassionate conversations between computers and humans. Rama was named by Forbes as one of the ‘Top 20 Women in AI Research’ in May 2017, was featured in the ‘A-Team in AI’ by Fortune magazine in July 2018, and was named one of the ‘Top 10 pioneering women in AI and Machine Learning’ by Enterprise Management 360. In her career, Rama has worked on agent-based decision support systems, electronic marketplaces, and semantic Web services, for which she led a World Wide Web Consortium (W3C) standard. Rama has co-authored 4 book chapters and over 100 technical papers. Rama has 18 issued patents and 25+ pending. She is the recipient of 3 best paper awards in AI and Operations Research. Rama holds a master’s degree in Computer Science and received a gold medal for highest academic excellence from New York University for her MBA. Rama served as the President of ISSIP, a Service Science professional society, for 2018 and continues to actively drive AI projects through this professional society.
Business Talk | AI Management
When companies want to become great at most competencies – we want to be design driven! we want a great brand! – they often invest in building robust teams around those disciplines. When companies want to become more data driven, however, the instinct is different: the first focus is often on tooling and efficient scaling.
Nobody believes they can become a leading tech company and only hire a few engineers. Nobody believes they can be the next Apple by buying the design tools that Apple designers buy. Nobody believes they can have a brand like Nike by using their marketing automation tools. In these disciplines, companies understand that expertise comes from investing in the experts…more details
Benn Stancil is a cofounder and Chief Analyst at Mode, a company building collaborative tools for analysts. Benn is responsible for overseeing Mode’s internal analytics efforts. Benn is also an active contributor to the data science community, frequently helping data science teams build their technology stacks and establish data-driven cultures within their companies. In addition, Benn provides strategic oversight and guidance to Mode’s product direction as a member of the product leadership team.
Prior to Mode, Benn was a senior analyst at Microsoft and Yammer, where he helped lead product analytics. Benn also worked as an economic analyst at the Carnegie Endowment for International Peace, a think tank in Washington, DC.
Business Talk | AI Expertise
Because people label and save Pins to specific boards, they add context to Pins every time they Pin, which helps Pinterest identify taste and the overlapping interests between people. This session covers the future of visual discovery and personalization; the work that goes into predicting what someone will love next (from style to beauty to traveling to Hawaii to chicken recipes) and powering a recommendations engine that surfaces billions of ideas to hundreds of millions of people; and a deep dive into recent advancements in computer vision and their applications in commerce, including Lens camera search, automated Shop the Look, Complete the Look, and the evolution of visual embeddings…more details
Chuck Rosenberg is Head of Computer Vision at Pinterest, where he leads the visual search team responsible for breakthroughs in computer vision and for launching some of the first visual search products on the market, including Lens camera search and the ability to visually search for specific items in a Pin or anywhere online. With a continued investment in AI and computer vision, Pinterest is a global platform working at large scale, where images are central and computer vision is used across the product not only to identify objects, but to predict what a person will want next and inform recommendations in areas like commerce.
Prior to Pinterest, Chuck worked at Google for nearly 14 years where he was a Principal Engineer and Computer Vision Research Lead. At Google, he led the Image Understanding Group as well as the Image Search and Photo Search engineering teams. His projects included the company’s first large-scale image deep network deployment, search by image, and computer vision-based result ranking.
Previously, he worked at HP Labs and was one of the original members of iRobot. Chuck earned his Ph.D. in Computer Science from Carnegie Mellon, with a focus on computer vision and machine learning.
Business Talk | AI Innovation
In this talk, we provide an overview of how artificial intelligence and machine learning techniques are being used in life sciences research, biomedicine, and drug discovery. We highlight important applications of AI/ML techniques across domains such as medical imaging, genomics (from experimental data interpretation to understanding the genome), and small-molecule drug discovery. We also discuss recent advances in using deep learning techniques to model protein sequences and structures, from basic scientific research to the design of novel proteins for chemical and therapeutic applications. We end with a brief overview of some open challenges in the field…more details
Mark DePristo is the Founder and CEO of BigHat Biosciences, an early-stage Bay Area startup applying AI/ML techniques to the design and optimization of next-generation antibodies. From 2016-2019, Mark founded and then led the Genomics team in Google Brain, which applied deep learning in TensorFlow to genomics problems, creating tools such as DeepVariant and Nucleus and research such as “A deep learning approach to pattern recognition for short DNA sequences” and “Using deep learning to annotate the protein universe.” Before joining Google he was Vice President of Informatics at SynapDx, a Google Ventures-backed startup developing a blood-based test for autism. As Co-Director of Medical and Population Genetics at the Broad Institute from 2008-2013, Mark created and led the team that developed the GATK, the gold-standard software for processing next-generation DNA sequencing data. He has a BA in Computer Science and Math from Northwestern University, a PhD in Biochemistry from the University of Cambridge as a Marshall Scholar, and did a postdoc at Harvard University in evolutionary biology. Dr. DePristo’s academic articles are widely published and have received more than 58,000 citations.
Business Talk | AI Innovation
The presentation will focus on best practices to develop Machine Learning powered applications that can move the needle on business critical KPIs. We will walk through a rapid prototyping framework to develop effective personalization experiences that customers find engaging, the mindsets and skills required to develop and execute on an innovation roadmap, how to continuously evaluate and work with vendors that provide ‘AI-powered’ solutions, and how to design online experiments to quickly iterate towards a better experience for customers.
By day, Pallav works as a Data Scientist, trying to extract meaningful signals from the noisy world we live in. As the moon rises and evening sets in, all bets are off, and one might find Pallav riding his bike through the Berkeley hills in brightly colored lycra or performing never-before-seen scenes of dramedy with his improv troupe.
Pallav is a part-time Human Centered Design Thinking coach and has helped non-profits and early-stage startups develop clarity on their mission and recognize growth areas. He moved to the Bay Area in 2010 and somehow managed to acquire a Masters in Structural Engineering after spending two years actually learning how to think.
He is an avid follower of Seth Godin, Ken Robinson, and Nassim Nicholas Taleb, and is currently looking at ways to explain algorithms through cute, anthropomorphized animals.
Business Talk | AI Expertise
Astute investors have shifted their attention to explore the information content in unstructured data sets to differentiate their source of alpha. In this presentation, we will explore a number of sentiment- and behavioral-based signals using the content from earnings call transcripts via NLP that have historically demonstrated stock selection power in the U.S. market…more details
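The flavor of transcript-based signal described above can be sketched with a deliberately crude example: score an earnings-call excerpt by counting positive versus negative tone words. The word lists and scoring rule below are invented for illustration only; production signals rely on far richer NLP than this.

```python
import re

# Toy lexicons; real signals would use much larger, validated word lists.
POSITIVE = {"growth", "strong", "record", "exceeded", "confident"}
NEGATIVE = {"headwinds", "decline", "uncertain", "weak", "missed"}

def sentiment_score(transcript):
    """Net tone: (positive - negative) / total matched words, in [-1, 1]."""
    words = re.findall(r"[a-z]+", transcript.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

call = ("We delivered record revenue and strong margin growth this "
        "quarter, though we remain uncertain about FX headwinds.")
print(sentiment_score(call))  # 3 positive vs 2 negative hits -> 0.2
```

A real pipeline would also handle negation, section structure (prepared remarks vs. Q&A), and speaker identity before aggregating scores into a stock-selection signal.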
Frank is a Senior Director and a key member of S&P Global Market Intelligence’s Quantamental Research group. His primary focus is to conduct systematic alpha research on global equities with publications on natural language processing, newly discovered stock selection anomalies, event-driven strategies and industry-specific signals. Frank has master’s degrees in Financial Engineering from UCLA Anderson and in Finance from Boston College Carroll, and has undergraduate degrees in Computer Science and Economics from University of California, Davis.
Business Talk | AI Expertise
Artificial intelligence is making its way into budgets at enterprises and startups alike. Companies are ramping up on investments in AI and machine learning in the hopes of transforming their business with automated insights based on algorithms. But are they prepared to implement AI the right way? As AI implementations reach a broader set of companies, there are important lessons to be learned on how to avoid algorithms that are inherently biased, or that will make unethical or immoral conclusions based on skewed or misleading data…more details
Harry Glaser was the co-founder and CEO of Periscope Data, which merged with Sisense, the world’s leading independent platform for analytics builders, in May 2019. At Sisense, he now serves as CMO and General Manager of San Francisco. Harry was recently named one of San Francisco’s top ranked CEOs by Comparably, and Periscope Data has been recognized as one of the top companies in San Francisco for diversity and inclusion by CloserIQ. He is an active member of Founders for Change and Project Include, and led Periscope Data to be one of the original companies participating in 2016’s White House Equal Pay Pledge. Prior to founding Periscope Data in 2012, Harry was a Product Lead at Google AdWords and graduated from the University of Rochester with a bachelor’s degree in computer science.
Business Talk | AI Innovation
Voice-enabled technology has been around a few years now (“Hey Siri”), and consumers are increasingly embracing the technology as it weaves into their daily lives. But as more and more systems are becoming voice and AI enabled, we’re starting to witness a new shift from consumer-focused voice innovations to the enterprise. The goal of these solutions is to create new efficiencies that increase your capacity to be more productive so you can focus your energy and time on more important tasks, while AI technology handles the rest…more details
Omar Tawakol is Chief Executive Officer of Voicea, whose technology powers EVA, an enterprise voice assistant that works with more than 5,000 companies and partners with tools like Slack, Salesforce, and BlueJeans. Prior to Voicea, Omar was the founder and CEO of BlueKai, which built the world’s largest consumer data marketplace and DMP. Oracle acquired BlueKai in 2014, and Omar served as SVP & GM of the Oracle Data Cloud. Omar earned an MS in CS from Stanford (BS, MIT), where he researched and published work on AI agents.
Business Talk | AI Innovation
The latest AI advances have the potential to massively improve our health and well-being. However, most of the work is yet to be done. In this talk, we will explore the most important opportunities for AI in healthcare as well as the specific challenges facing it. We will start by examining AI’s ability to diagnose major life-threatening conditions much earlier than other methods, sometimes years earlier. We will talk about AI’s ability to recommend dramatically more effective and less harmful treatment plans based on its interpretation of a patient’s medical history, treatment effectiveness, and real-time patient monitoring. Finally, we will talk about AI’s role in making our healthcare system effective and affordable for everyone. For each area, we will discuss both the latest progress made and the challenges yet to be solved…more details
Alex Ermolaev, Director of AI at Change Healthcare, has developed and led a variety of AI projects over the last 20 years, including enterprise AI, NLP, AI platforms/tools, imaging, and self-driving cars. Alex is one of the most frequent “AI in Healthcare” speakers in Silicon Valley. Change Healthcare is one of the largest healthcare technology companies in the world.
Business Talk | AI Innovation
As AI techniques and data bring modeling into clinical biomedicine, we will discuss a framework for assessing the maturity of these technologies and the knowledge gaps associated with them. The goal of this presentation is to provide key information concerning the scientific, regulatory, legal, and cultural factors essential for the successful introduction of AI in healthcare. We will also arm audience members with critical methods to avoid the false conclusions and exaggerated expectations associated with AI, and we will discuss select real-world examples.
Michael E. Zalis, MD, is Chief of Clinical Solutions and Strategy for One Brave Idea, an incubator focused on care delivery innovation and cardiovascular genomics. He is also a part-time interventional radiologist at Massachusetts General Hospital, where he has been on faculty for many years. At One Brave Idea, he leads efforts to develop software products and clinical solutions, develops business strategy in the areas of cardiovascular genomics and insurance arbitrage, and works to improve operations for chronic disease management and digital phenotyping. Previously, he was a founder and Chief Medical Officer of QPID Health, Inc., a venture-backed medical informatics software company. Dr. Zalis holds a BA in biophysics from the University of Pennsylvania and an MD from the University of Virginia Health Sciences Center. He is a Fellow of the American College of Radiology.
Business Talk | AI Management
Leaders across industry have been increasing investment in advanced analytics, data science, and AI. Yet many have struggled to recognize a return on their investment. Many of these technical teams are making transformative contributions to their companies, yet they aren’t being acknowledged for it, often simply because their success is not being properly measured.
Other teams are becoming frustrated because their successful data science projects are not being translated into successful business projects. This often occurs because leaders are unable to differentiate high-impact data science projects from low-impact ones. Without the ability to do so, leaders cannot effectively lead a team to choose impactful projects…more details
As the Head of Corporate Training Executive Programs, Kerstin Frailey leads the executive, management, and data literacy program development at Metis. Prior to joining Metis she worked as a data scientist for the Data, Growth, and Marketplace teams at Postmates and as the Director of Data Science at GuideStar. She holds graduate degrees in statistics, mathematical statistics, and mathematical computer science from Cornell University and the University of Illinois at Chicago. She was a data science fellow at the University of Chicago and is ABD in the Statistics PhD program at Cornell.
Business Talk | AI Innovation
I will discuss the challenges of building real-world AI products in today’s enterprise environment and, in particular, the tradeoffs between “Discovery vs. Delivery.” It seems every company today wants AI, but plug-and-play AI offerings are few and far between. I will describe the balance between the R&D necessary to create bespoke products and getting them working within existing IT deployment environments. Topics will include cultural differences between IT and AI, how to scope a successful R&D project, data mining vs. product development, model governance, existing deployment solutions, and testing…more details
Charles Martin holds a PhD in Theoretical Chemistry from the University of Chicago. He was then an NSF Postdoctoral Fellow and worked in a theoretical physics group at UIUC that studied the statistical mechanics of neural networks. He currently owns and operates Calculation Consulting, a boutique consultancy specializing in ML and AI, supporting clients doing applied research in AI. He maintains a well-recognized blog on practical ML theory, and his recent research includes the work on Implicit and Heavy-Tailed Self-Regularization in Deep Learning.
Business Talk | AI Expertise
Transaction data has immense potential to go beyond traditional data aggregation by banks to connecting the dots and providing valuable customer insights across industries. By acquiring financial data, and then cleansing and enriching it, organizations can derive insights that solve business issues: closing supply-chain gaps, identifying financial lending opportunities, improving the marketing efforts of a retail giant, identifying growth opportunities for clients of investment research/PE/VC firms, mitigating losses, and much more. Enriched transaction data is the purest form of data, providing real insights into what customers are doing and what they want to do with their funds. In this session, we will explore the importance of utilizing alternative data (such as transaction data) and applying machine learning algorithms to datasets to clarify and categorize transactional data. Institutions can leverage this customer data to provide personalized experiences and advice…more details
Business Talk | AI Management
Companies adopt different organizational models for data science, sometimes organically, but oftentimes not. There are different trade-offs to each of these organizational structures. Completely decentralized, business-led data science teams can more responsively understand and attend to business needs, often lowering time to deployment and increasing the likelihood of operationalized solutions. However, several challenges can arise, including the lack of enterprise-wide adoption of data science, heavily siloed data and capabilities, and little to no cross-functional capability. Centralized teams can offer scale and deploy technology and infrastructure investments quickly, but at the risk of slower delivery times and a focus on technology-driven, rather than business-driven, problems…more details
Business Talk | AI Management
ML product teams at Twitter have largely relied on their own feature engineering using their own data models and technology stacks. With the proliferation of ML applications, this approach no longer scales. Democratizing feature access was a key objective of the ML Platform built by Twitter Cortex to leverage feature investments across the company. In this talk we’ll share our story, from the genesis of the idea, to overcoming the technological challenges of finding the right abstractions to replace highly optimized custom pipelines, to thinking about the incentives that make the marketplace work…more details
Wolfram “Wolf” Arnold has a Ph.D. in computational physics and has been a Silicon Valley veteran since the late dot com boom. He joined Twitter in 2013, and Twitter’s Cortex team in 2015. He was part of Cortex’s pivot to build an ML Platform organization in 2017, and founded the Feature Management team within Cortex whose flagship product has been Twitter’s Feature Store. The Feature Store lets any ML product team benefit from feature engineering investments across the company and has unlocked model improvements and top-line metrics gains in several product areas.
Business Talk | AI Expertise
The deployment of AI is truly transformational when it impacts the core business tasks and processes of the enterprise. For this reason, most organizations should only undertake AI initiatives with strategic impact potential. Experience shows that AI transformation programs achieve better results if organized in successive iterations of projects that implement high-value or even disruptive use cases. If managed properly, each project will create momentum and elements that will accumulate until critical mass is achieved. This iterative bottom-up approach is the most effective and realistic way of facing the daunting task of achieving the required AI proficiency within a reasonable time and cost…more details
Fernando Nunez-Mendoza, a serial technology entrepreneur and disruptor, is founder, chief executive officer, and chief technology officer of fonYou, a fast-growing international company born in Barcelona, Spain. fonYou’s mission is to build the mobile carrier of the future powered by AI. Before fonYou, he was a management consulting partner at Accenture and Diamond Cluster International helping global telecommunications, technology, and financial services firms embrace the internet and thrive in the brave new digital world. In his earlier career, Fernando worked for the European Space Agency and lectured and performed research in computer engineering and neural networks.
Fernando holds MSEE and Ph.D. degrees in Electrical and Computer Engineering from the Polytechnic University of Catalonia (Spain), was an invited Visiting Scholar at Purdue University, and is an alumnus of the Stanford University Graduate School of Business.
Business Talk | AI Management
For a growing data organization, inbound requests for ad hoc data analysis are a good sign. They mean people are eager to make data a central part of the business, and they trust your team to help them do it. And it’s an incredible opportunity to drive impact! But data teams frequently don’t grow as quickly as the companies around them, so how can you avoid becoming the victim of your own success? Reddit’s Data Science team found itself in this situation as the company entered rapid growth mode, with roughly 50 employees for each data scientist. This created an environment of constant microdistraction as the team attempted to keep up with the demand for pulling data and ad hoc analysis, and this distraction ultimately caused more problems than the ad hoc work was solving. The solution?…more details
Katie Bauer is a data scientist and engineer based in the San Francisco Bay Area with experience in search, digital advertising, online retail, and consumer web. She currently works at Reddit, where she was a founding member of the Data Science team. She supervises the Data Science and Analytics On Call program, and has worked on everything from building analytics and experimentation infrastructure to modeling user behavior to managing the data science internship program.
Talk | Research Frontiers | Deep Learning | Beginner-Intermediate
Reinforcement Learning (RL) has been applied to diverse problem domains with varied success. Applications to game domains have surpassed all expectations and achieved super-human performance, while applications to robotics and other areas involving control of physical systems have been more limited. This tutorial will discuss model-based methods that can substantially increase performance, both in terms of computational efficiency and quality of solutions. While RL relies on simulation models to sample data, models can do much more than generate samples. From a software perspective one can think of a physics simulator as a library providing many API function calls, only one of which is used to generate samples. Why not leverage the rest of the underlying functionality to develop better learning algorithms? This involves combining ideas from machine learning with ideas from geometry, physics, control theory and numerical optimization that go beyond learning from data. The tutorial will include examples from recent research projects, as well as demonstration of (not yet released) software called Optico – which is a unified framework for model-based optimization developed on top of the MuJoCo physics simulator…more details
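The talk’s point that a simulator is more than a sample generator can be illustrated with a minimal model-based planner: because the dynamics model can be rolled forward from any state, candidate action sequences can be evaluated and only the best one executed. The 1-D point-mass dynamics, cost function, and random-shooting planner below are a made-up toy, not Optico or MuJoCo.

```python
import random

def step(state, action, dt=0.1):
    """Toy 1-D point-mass dynamics: state = (position, velocity)."""
    pos, vel = state
    vel = vel + action * dt
    pos = pos + vel * dt
    return (pos, vel)

def cost(state):
    """Distance of the position from the goal at x = 1.0."""
    return abs(state[0] - 1.0)

def random_shooting(state, horizon=20, n_candidates=300, seed=0):
    """Model-based planning: roll out random action sequences through
    the dynamics model and return the first action of the best one."""
    rng = random.Random(seed)
    best_cost, best_first_action = float("inf"), 0.0
    for _ in range(n_candidates):
        actions = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        s, total = state, 0.0
        for a in actions:
            s = step(s, a)
            total += cost(s)
        if total < best_cost:
            best_cost, best_first_action = total, actions[0]
    return best_first_action

# Replan at every step (model-predictive-control style).
state = (0.0, 0.0)
for _ in range(60):
    state = step(state, random_shooting(state))
print(round(state[0], 2))  # should end up near the goal at x = 1.0
```

Note the planner calls `step` hundreds of times per decision, i.e., it uses the model’s API for much more than generating training samples — the core of the model-based argument in the abstract.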
Emo Todorov is Affiliate Professor of Computer Science & Engineering and Applied Mathematics at the University of Washington, where he directs the Movement Control Laboratory focusing on robot control and reinforcement learning. He has made earlier contributions to neuroscience, cognitive science, control theory and numerical optimization. He is founder of Roboti LLC, developing the MuJoCo physics simulator which is widely used in RL and robotics.
Talk | Deep Learning | AI for Engineers | Intermediate – Advanced
A portion of gross merchandise volume in e-commerce is driven by categories such as fashion, home, furniture, lifestyle, and apparel. These categories differ from others in their need for discovery and in the way people make purchase decisions. They depend on visual appeal and visual discovery: people want to navigate products by how similar they are in visual appearance rather than by any other attribute.
Most search engines, including e-commerce search engines, provide only text-based search, which relies on the textual attributes of products. This does not provide a good search and recommendation experience for categories that are rich in visual information…more details
Bugra is a tech lead at Jet.com, where he works on search and recommender systems. Prior to Jet.com, he led recommender systems at Hinge. He received a B.S. from Bilkent University and an M.Sc. from New York University, focusing on signal processing and machine learning.
He maintains two open-source Python packages.
Workshop | Machine Learning | Data Visualization | Intermediate
With the recent popularity of machine learning algorithms such as neural networks and ensemble methods, machine learning models have become more like a ‘black box’: harder to understand and interpret. To gain end users’ trust, there is a strong need for tools and methodologies that help users understand and explain how predictions are made. Data scientists also need insights into how a model can be improved. Much research has gone into model interpretability, and several open-source tools, including LIME, SHAP, and GAMs, have recently been published on GitHub. In this talk, we present Microsoft’s brand-new Machine Learning Interpretability toolkit, which incorporates cutting-edge technologies developed by Microsoft and leverages proven third-party libraries. It creates a common API and data structure across the integrated libraries and integrates with Azure Machine Learning services. Using this toolkit, data scientists can explain machine learning models using state-of-the-art technologies in an easy-to-use and scalable fashion at training and inference time…more details
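The perturbation idea behind tools like LIME can be sketched in a few lines: nudge one input feature at a time and measure how much the model’s output moves. The “black box” below is a stand-in scoring function invented for illustration; it is not Microsoft’s toolkit or the LIME library itself.

```python
import random

# A "black box" model we want to explain: here a simple scoring
# function, standing in for any trained classifier or regressor.
def black_box(features):
    x1, x2, x3 = features
    return 3.0 * x1 - 0.5 * x2 + 0.0 * x3

def perturbation_importance(model, instance, n_samples=200, scale=0.1, seed=0):
    """Estimate each feature's local importance by perturbing it
    and measuring the average absolute change in the prediction."""
    rng = random.Random(seed)
    base = model(instance)
    importances = []
    for i in range(len(instance)):
        total = 0.0
        for _ in range(n_samples):
            perturbed = list(instance)
            perturbed[i] += rng.gauss(0.0, scale)
            total += abs(model(perturbed) - base)
        importances.append(total / n_samples)
    return importances

scores = perturbation_importance(black_box, [1.0, 1.0, 1.0])
print(scores)  # x1 dominates, x2 matters a little, x3 not at all
```

Libraries like LIME and SHAP refine this idea with local surrogate models and game-theoretic attributions, but the perturb-and-observe loop is the common core.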
Mehrnoosh Sameki is a technical program manager at Microsoft responsible for leading the product efforts on machine learning transparency within the Azure Machine Learning platform. Prior to Microsoft, she was a data scientist at Rue Gilt Groupe, an eCommerce company, applying data science and machine learning in the retail space to drive revenue and enhance customers’ personalized shopping experiences. Before that, she completed a PhD in computer science at Boston University. In her spare time, she enjoys trying new food recipes, watching classic movies and documentaries, and reading about interior design and house decoration.
Tutorial | Research Frontiers | Deep Learning | Intermediate – Advanced
Autonomous driving has been an active area of research and development over the last decade. Despite considerable progress, there are many open challenges, including automated driving in dense, urban scenes. In this talk, we give an overview of our recent work on simulation and navigation technologies for autonomous vehicles. We present a novel simulator, AutonoVi-Sim, that uses recent developments in physics-based simulation, robot motion planning, game engines, and behavior modeling. We describe novel methods for interactive simulation of multiple vehicles with unique steering or acceleration limits, taking into account vehicle dynamics constraints. In addition, AutonoVi-Sim supports navigation for non-vehicle traffic participants such as cyclists and pedestrians. AutonoVi-Sim also facilitates data analysis, allowing for capturing video from the vehicle’s perspective, exporting sensor data such as relative positions of other traffic participants and camera data for a specific sensor, and detection and classification results…more details
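The per-vehicle steering and acceleration limits mentioned above are the kind of constraint a kinematic bicycle model enforces at every simulation step. The sketch below is a generic textbook model with invented parameter values, not AutonoVi-Sim’s actual dynamics.

```python
import math

def bicycle_step(x, y, heading, speed, steer, accel,
                 wheelbase=2.7, dt=0.1,
                 max_steer=0.5, max_accel=3.0):
    """One step of a kinematic bicycle model with actuator limits,
    the kind of per-vehicle constraint a traffic simulator enforces."""
    # Clamp commands to this vehicle's steering/acceleration limits.
    steer = max(-max_steer, min(max_steer, steer))
    accel = max(-max_accel, min(max_accel, accel))
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += speed / wheelbase * math.tan(steer) * dt
    speed = max(0.0, speed + accel * dt)
    return x, y, heading, speed

# A vehicle accelerating gently from rest while turning left.
state = (0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    state = bicycle_step(*state, steer=0.2, accel=1.0)
print(round(state[3], 1))  # speed after 10 s at 1 m/s^2 -> 10.0
```

Giving each simulated vehicle its own `max_steer`/`max_accel` (and wheelbase) is what makes a heterogeneous traffic population behave realistically.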
Dinesh Manocha is the Paul Chrisman Iribe Chair in Computer Science & Electrical and Computer Engineering at the University of Maryland College Park. He is also the Phi Delta Theta/Matthew Mason Distinguished Professor Emeritus of Computer Science at the University of North Carolina – Chapel Hill. He has won many awards, including the Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the ONR Young Investigator Award, and the Hettleman Prize for scholarly achievement. His research interests include multi-agent simulation, virtual environments, artificial intelligence, and robotics. His group has developed a number of packages for multi-agent simulation, crowd simulation, and physics-based simulation that have been used by hundreds of thousands of users and licensed to more than 60 commercial vendors. He has published more than 510 papers and supervised more than 36 PhD dissertations. He is an inventor on 10 patents, several of which have been licensed to industry. His work has been covered by the New York Times, NPR, Boston Globe, Washington Post, ZDNet, as well as a DARPA Legacy press release. He is a Fellow of AAAI, AAAS, ACM, and IEEE, a member of the ACM SIGGRAPH Academy, and a Pioneer of the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc in November 2016.
Talks | Data Science Management | Intermediate-Advanced
The last decade saw a massive revolution in the use of data, with companies effectively leveraging it to transform their business and industries as a whole. But a fully streamlined process for consistently and easily building and deploying new models is rare. Why?
Early in my career, I worked in dozens of organizations, building models and teams to drive transformation, getting to the first MVPs: the minimum viable product, built on a minimum viable platform, using the minimum viable (and most valuable) players – a small team of skilled individuals. It was challenging, and getting beyond MVP seemed, well, unviable…more details
Panel | Deep Learning | Open Source | All levels
Despite expanding regulations, surveillance tech and Face ID deployments continue to grow geometrically. Countries such as the US, China, and the UK are leading the way, while the rest of the world is not far behind. Biometric data and algorithms are at the epicenter of a perfect storm, as rapid technological advances meet rising privacy concerns. At the one end, identification technology like face recognition can provide consumers both convenience and secure authentication. At the other end, biometric databases and infrastructure have become attractive targets for hackers. In a new twist, open source AI tools are now available that can help malicious actors manipulate face and voice data to create deep fakes. These fakes can be weaponized by rogue nation states for disinformation campaigns – to destabilize financial markets or manipulate elections…more details
George is Director of Computing and Data Science at GSI Technology, an embedded hardware and artificial intelligence company. He’s held senior leadership roles in software, data science, and research, including tenures at Apple’s New Product Architecture group and at New York University’s Courant Institute. He can talk on a broad range of topics at the intersection of e-commerce, machine learning, software development, and cloud security. He is an author on several research papers in computer vision and deep learning, published at NIPS, CVPR, ICASSP, and SIGGRAPH.
Matthew Zeiler, Founder and CEO of Clarifai, is a machine learning Ph.D. and thought leader pioneering the field of applied artificial intelligence (AI). Matt’s groundbreaking research in computer vision alongside renowned machine learning experts Geoff Hinton and Yann LeCun has propelled the image recognition industry from theory to real-world application. Since starting Clarifai in 2013, Matt has evolved his award-winning research into developer-friendly products that allow enterprises to quickly and seamlessly integrate AI into their workflows and customer experiences. Today, Clarifai is the leading independent AI company and “widely seen as one of the most promising [startups] in the crowded, buzzy field of machine learning.” (Forbes) Reach him @MattZeiler.
Mark is the CEO and Co-founder of Smile Identity, a leading provider of digital authentication and KYC services across Africa. Smile’s facial recognition SDKs and ID verification APIs enable banks, telecoms and fintechs to confirm the true identity of any smartphone user with just a Smartselfie™. Previously, Mark led the Khosla Impact Fund, with investments in payments, solar, lending and ecommerce across Africa and India. He began his career in investment banking and venture capital in Silicon Valley at Bank of America and then Draper Fisher Jurvetson. He is passionate about the power of technology and entrepreneurship to transform emerging markets.
Alex Comerford is a Data Scientist at Bloomberg. He has built custom data-driven cyber-threat detection strategies, most recently as a data scientist at Capsule8. He continues to be a thought leader in cybersecurity, presenting regularly on topics at the intersection of open-source software, AI, and advanced threat detection. Most recently, he was a speaker at AnacondaCON 2019. Alex is a graduate of SUNY Albany in Nanoscale Engineering.
Giorgio Patrini is CEO and Chief Scientist at Deeptrace, an Amsterdam-based cybersecurity startup building deep learning technology for detecting and understanding fake videos. Previously, he was a postdoctoral researcher at the University of Amsterdam, working on deep generative models; and earlier at CSIRO Data61 in Sydney, Australia, building privacy-preserving learning systems with homomorphic encryption. He obtained his PhD in machine learning at the Australian National University. In 2012 he cofounded Waynaut, an Internet mobility startup acquired by lastminute.com in 2017.
Workshop | Deep Learning | Intermediate
Reinforcement learning considers the problem of learning to act and is poised to power next-generation AI systems, which will need to go beyond input-output pattern recognition (even if such simpler AI has sufficed for speech, vision, and machine translation) and generate intelligent behavior. Example application domains include robotics, marketing, dialogue, HVAC, and optimizing healthcare and supply chains.
In this tutorial we will cover the foundations of Deep RL (including, but not limited to: CEM, DQN, TRPO, PPO, SAC) as well as dive into the specifics of some of the main success stories and provide perspective on where the field is headed.
To get the most out of this tutorial, the audience is assumed to have basic familiarity with neural networks, optimization, and probability…more details
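Before the deep variants listed above (DQN, TRPO, PPO, SAC), it helps to see the tabular core they build on. A minimal Q-learning sketch on a made-up five-state chain, where only the rightmost state gives reward:

```python
import random

# Chain MDP: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4

def env_step(state, action):
    next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                action = rng.randrange(2)
            else:
                action = 0 if Q[state][0] > Q[state][1] else 1
            next_state, reward, done = env_step(state, action)
            # Q-learning update: bootstrap from the best next action
            target = reward + (0.0 if done else gamma * max(Q[next_state]))
            Q[state][action] += alpha * (target - Q[state][action])
            state = next_state
    return Q

Q = q_learning()
policy = [0 if q[0] > q[1] else 1 for q in Q[:GOAL]]
print(policy)  # the greedy policy should move right in every state
```

DQN replaces the table `Q` with a neural network (plus replay buffers and target networks), but the bootstrapped update rule is exactly this one.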
Professor Pieter Abbeel is Director of the Berkeley Robot Learning Lab and Co-Director of the Berkeley Artificial Intelligence (BAIR) Lab. Abbeel’s research strives to build ever more intelligent systems, which has his lab push the frontiers of deep reinforcement learning, deep imitation learning, deep unsupervised learning, transfer learning, meta-learning, and learning to learn, as well as study the influence of AI on society. His lab also investigates how AI could advance other science and engineering disciplines. Abbeel’s Intro to AI class has been taken by over 100K students through edX, and his Deep RL and Deep Unsupervised Learning materials are standard references for AI researchers. Abbeel has founded three companies: Gradescope (AI to help teachers with grading homework and exams), Covariant (AI for robotic automation of warehouses and factories), and Berkeley Open Arms (low-cost, highly capable 7-dof robot arms), advises many AI and robotics start-ups, and is a frequently sought after speaker worldwide for C-suite sessions on AI future and strategy. Abbeel has received many awards and honors, including the PECASE, NSF-CAREER, ONR-YIP, Darpa-YFA, TR35. His work is frequently featured in the press, including the New York Times, Wall Street Journal, BBC, Rolling Stone, Wired, and Tech Review.
Workshop | Research Frontiers | Deep Learning | Advanced
Integration of data from multiple sources, with and without labels, is a fundamental problem in transfer learning, where models must be trained on a source data distribution that differs from one or more target data distributions. For example, in healthcare, models must flexibly inter-operate on large-scale medical data gathered across multiple hospitals, each with confounding biases. Domain adaptation enables this form of transfer learning by identifying deep feature representations that are invariant across domains (data sources), thereby allowing transfer to unseen data distributions.
In this workshop, we will teach attendees how to use domain adaptation for machine learning applications in computer vision for healthcare. More specifically, we will introduce scAlign, our recently developed domain adaptation approach that can integrate data from multiple sources in a fully unsupervised, semi-supervised, or fully supervised fashion…more details
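scAlign itself is not reproduced here. As a simpler illustration of the same idea (aligning feature distributions across domains), below is a minimal numpy sketch of classical correlation alignment (CORAL); the regularization constant and data shapes are illustrative assumptions:

```python
import numpy as np

def coral(Xs, Xt, eps=1e-5):
    """CORAL: re-color source features so their second-order statistics match the target domain."""
    def msqrt(C, inverse=False):
        # Matrix square root (or inverse square root) via symmetric eigendecomposition
        vals, vecs = np.linalg.eigh(C)
        vals = np.clip(vals, eps, None)
        p = -0.5 if inverse else 0.5
        return vecs @ np.diag(vals ** p) @ vecs.T
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    Xs_centered = Xs - Xs.mean(axis=0)
    # Whiten with the source covariance, then re-color with the target covariance
    return Xs_centered @ msqrt(Cs, inverse=True) @ msqrt(Ct) + Xt.mean(axis=0)

rng = np.random.default_rng(1)
Xs = rng.normal(size=(500, 3))                           # source domain
Xt = rng.normal(size=(500, 3)) * [2.0, 0.5, 1.0] + 1.0   # rescaled, shifted target
Xs_aligned = coral(Xs, Xt)
```

After alignment, a model trained on `Xs_aligned` sees the target domain's feature statistics; deep approaches like scAlign learn the invariant representation instead of computing it in closed form.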
Gerald Quon is an Assistant Professor in the Department of Molecular and Cellular Biology at the University of California at Davis. He obtained his Ph.D. in Computer Science from the University of Toronto, M.Sc. in Biochemistry from the University of Toronto, and B. Math in Computer Science from the University of Waterloo. He also completed postdoctoral research training at MIT. His lab focuses on applications of machine learning to human genetics, genomics and health, and is funded by the National Science Foundation, National Institutes of Health, the Chan Zuckerberg Initiative, and the American Cancer Society.
Talks | DevOps & Management | Machine Learning | All levels
Most AI/ML projects start shipping models into production, where they can deliver business value, using the “no-process” process: people simply do their best with an ad-hoc process and familiar tools. This works for tiny teams at first, but as the team grows you’ll discover significant chaos and pain trying to operationalize AI.
We’ve been here before. Software development in the 90s was a lot like the “no-process process” for ML today. And just as the paradigm shift known as DevOps brought reproducibility, collaboration and continuous delivery to software, applying the same principles to ML can bring the same benefits to AI projects. Without it, AI projects fail and create financial and reputational risk…more details
Nick has been a data scientist since the early 2000s. After obtaining an undergraduate degree in geology at Cambridge University in England (2000), he completed Masters (2001) and PhD (2004) degrees in Astronomy at the University of Sussex, then moved to North America, completing postdoctoral positions in Astronomy at the University of Illinois at Urbana-Champaign (2004-9, joint with the National Center for Supercomputing Applications), and the Herzberg Institute of Astrophysics in Victoria, BC, Canada (2009-2013). He joined Skytree, a startup company specializing in machine learning, in 2012, and in 2017 the Skytree technology and team were acquired by Infosys. Machine learning has been part of his work since 2000, first applied to large astronomical datasets, followed by a wide range of applications as a generalist data scientist at Skytree, Infosys, Oracle, and now Dotscience.
Talk | Research Frontiers | Machine Learning | Advanced
To gain an edge in the markets, quantitative hedge fund managers require automated processing to quickly extract actionable information from unstructured and increasingly non-traditional sources of data. The nature of these “alternative data” sources presents challenges that are comfortably addressed through machine learning techniques. We illustrate the use of AI and ML techniques that help extract derived signals carrying significant alpha or risk premium and leading to profitable trading strategies.
This session will cover the following topics:
The broad application of machine learning in finance
Extracting sentiment from textual data such as news stories and social media content using machine learning algorithms
Construction of scoring models and factors from complex data sets such as supply chain graphs, options data (implied volatility skew, term structure), geolocation datasets, and ESG (Environmental, Social and Governance)
Robust portfolio construction using multi-factor models, blending factors derived from alternative data with traditional factors such as the Fama-French five-factor model…more details
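The sentiment-extraction step above can be illustrated with a deliberately tiny lexicon-based scorer; the lexicon, tokenizer, and scoring rule below are toy assumptions, not the production models discussed in the session:

```python
# Toy polarity lexicon for financial text (illustrative values only)
LEXICON = {"beat": 1, "strong": 1, "upgrade": 1,
           "miss": -1, "weak": -1, "downgrade": -1}

def sentiment_score(text):
    """Average polarity of lexicon words found in the text, in [-1, 1]; 0 if none found."""
    tokens = text.lower().split()
    hits = [LEXICON[t] for t in tokens if t in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

score = sentiment_score("Earnings beat estimates on strong demand")
```

Real systems replace the hand-built lexicon with supervised or deep models, but the output, a per-document signal that can feed a factor model, plays the same role.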
Dr. Arun Verma joined the Bloomberg Quantitative Research group in 2003. Prior to that, he earned his Ph.D. from Cornell University in the areas of computer science & applied mathematics. At Bloomberg, Dr. Verma’s work initially focused on Stochastic Volatility Models for Derivatives & Exotics pricing. More recently, he has enjoyed working at the intersection of diverse areas such as data science (with structured & unstructured data), innovative quantitative models across all asset classes & using machine learning methods to help reveal embedded signals in financial data.
Talk | Research Frontiers | Deep Learning | Intermediate – Advanced
Deep neural networks now enable machines to learn to solve problems that were previously easy for humans but difficult for computers, like playing Atari games or identifying lions and jaguars in photos. But how do these neural nets actually work? What concepts do they learn en route to their goals? We built and trained the networks, so on the surface these questions might seem trivial to answer. However, network training dynamics, internal representations, and mechanisms of computation turn out to be surprisingly tricky to study and understand, because networks have so many connections — often millions or more — that the resulting computation is fundamentally complex.
This high fundamental complexity enables the models to master their tasks, but we find now that we need something like neuroscience just to understand the AI models that we’ve constructed…more details
Jason Yosinski is a machine learning researcher, founding member of Uber AI Labs, and scientific adviser to Recursion Pharmaceuticals. His work focuses on building more capable and more understandable AI. As scientists and engineers build increasingly powerful AI systems, the abilities of these systems increase faster than does our understanding of them, motivating much of his work on AI Neuroscience — an emerging field of study that investigates fundamental properties and behaviors of AI systems. Dr. Yosinski completed his PhD as a NASA Space Technology Research Fellow working at the Cornell Creative Machines Lab, the University of Montreal, Caltech/NASA Jet Propulsion Laboratory, and Google DeepMind. His work on AI has been featured on NPR, Fast Company, the Economist, TEDx, and on the BBC. Prior to his academic career, Jason cofounded two web technology companies and started a program in the Los Angeles school district that teaches students algebra via hands-on robotics. In his free time, Jason enjoys cooking, sailing, reading, paragliding, and sometimes pretending he’s an artist.
Talk | Machine Learning | Research Frontiers | Intermediate
Representation learning, colloquially known as embeddings, has emerged as an important unifying theme in the machine learning community and is widely used in communities ranging from social media to computer vision and natural language processing. The core idea is to leverage large quantities of context-rich data, whether labeled or unlabeled, to ‘embed’ data points into vectors. These vectors can then serve as feature sets for classic machine learning classifiers like Logistic Regression. Embedding algorithms like word2vec and DeepWalk have yielded impressive results in natural language and graph processing pipelines. In the research community, there is a concerted effort now to build faster and better embedding algorithms for all kinds of heterogeneous datasets, including videos with tags and annotations, social media data and tables…more details
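The “embed, then feed to a classic classifier” pattern described above can be sketched with toy vectors. In the sketch below the embedding values are hand-picked assumptions, not trained word2vec output; documents are embedded by averaging word vectors, and the resulting feature vectors can be compared or passed to a downstream model:

```python
import numpy as np

# Toy 3-d word embeddings (hand-picked, illustrative values)
EMB = {
    "good":  np.array([0.9, 0.1, 0.0]),
    "great": np.array([0.8, 0.2, 0.1]),
    "bad":   np.array([-0.9, 0.1, 0.0]),
    "movie": np.array([0.0, 0.9, 0.1]),
}

def embed(doc):
    """Average the vectors of known tokens to get a document feature vector."""
    vecs = [EMB[w] for w in doc.lower().split() if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine(embed("good movie"), embed("great movie"))
```

With trained embeddings, the `embed(doc)` vectors would be the feature set handed to a classifier such as Logistic Regression, exactly as the abstract describes.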
Mayank Kejriwal is a research scientist and lecturer at the University of Southern California’s Information Sciences Institute (ISI). He received his Ph.D. from the University of Texas at Austin. His dissertation involved Web-scale data linking, and in addition to being published as a book, was recently recognized with an international Best Dissertation award in his field. His research is highly applied and sits at the intersection of knowledge graphs, social networks, Web semantics, network science, data integration and AI for social good. He has contributed to systems that are being used by both DARPA and by law enforcement, and he has active collaborations in both academia and industry. He is currently co-authoring a textbook on knowledge graphs (MIT Press, 2018), and has delivered tutorials and demonstrations at numerous conferences and venues, including KDD, AAAI, and ISWC.
Talks | Data Science Management | All Levels
Many businesses today are using traditional AI as an advanced modeling tool to create insights for critical business decision making. But insights alone are not enough. Evolutionary AI provides prescriptive guidance for your decision making to get you to optimized outcomes. Today, we can produce optimal solutions for the most complex multi-objective search-spaces by applying the creative and efficient capabilities of evolutionary computation. This session will show how such decision augmentation can be achieved via step by step hands-on examples, demos and interactive white-boarding to scope out several use cases of interest for the audience…more details
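As a concrete, minimal illustration of the evolutionary computation mentioned above (not Cognizant's system; the bitstring problem, operators, and rates are all toy assumptions), a basic genetic algorithm looks like this:

```python
import random

def evolve(fitness, n_bits=20, pop_size=40, generations=60, mut_rate=0.05):
    """A minimal genetic algorithm: truncation selection, one-point crossover, mutation."""
    rng = random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                 # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)             # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < mut_rate else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(sum)  # OneMax: fitness is simply the number of 1-bits
```

Multi-objective production systems replace the single `fitness` callable with several competing objectives and a Pareto-based selection scheme, but the generate-evaluate-select loop is the same.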
Babak Hodjat (https://en.wikipedia.org/wiki/Babak_Hodjat) is VP of Evolutionary AI at Cognizant, and former co-founder and CEO of Sentient, responsible for the core technology behind the world’s largest distributed artificial intelligence system. Babak was also the founder of the world’s first AI-driven hedge-fund, Sentient Investment Management. Babak is a serial entrepreneur, having started a number of Silicon Valley companies as main inventor and technologist. Prior to co-founding Sentient, Babak was senior director of engineering at Sybase iAnywhere, where he led mobile solutions engineering. Prior to Sybase, Babak was co-founder, CTO and board member of Dejima Inc. Babak is the primary inventor of Dejima’s patented, agent-oriented technology applied to intelligent interfaces for mobile and enterprise computing – the technology behind Apple’s Siri. Babak is a published scholar in the fields of Artificial Life, Agent-Oriented Software Engineering, and Distributed Artificial Intelligence, and has 31 granted or pending patents to his name. He is an expert in numerous fields of AI, including natural language processing, machine learning, genetic algorithms, distributed AI, and has founded multiple companies in these areas. Babak holds a PhD in Machine Intelligence from Kyushu University, in Fukuoka, Japan.
Workshop | Deep Learning | Machine Learning | Intermediate
Natural language processing is a key component in many data science systems that must understand or reason about text. Common use cases include question answering, paraphrasing or summarization, sentiment analysis, natural language BI, and entity extraction. This talk introduces the open-source Spark NLP library, which within two years has become the most widely used NLP library in the enterprise, by implementing state-of-the-art deep learning NLP research as a production-grade, fast and scalable library for Python, Java and Scala.
Spark NLP natively extends the Spark ML pipeline APIs, enabling zero-copy, distributed, combined NLP & ML pipelines that leverage all of Spark’s built-in optimizations. Benchmarks and design best practices for building NLP, ML and DL pipelines on Spark will be shared. The library implements core NLP algorithms including lemmatization, part-of-speech tagging, dependency parsing, named entity recognition, spell checking and sentiment detection…more details
David Talby is a chief technology officer at Pacific AI, helping fast-growing companies apply big data and data science techniques to solve real-world problems in healthcare, life science, and related fields. David has extensive experience in building and operating web-scale data science and business platforms, as well as building world-class, Agile, distributed teams. Previously, he was with Microsoft’s Bing Group, where he led business operations for Bing Shopping in the US and Europe, and worked at Amazon both in Seattle and the UK, where he built and ran distributed teams that helped scale Amazon’s financial systems. David holds a PhD in computer science and master’s degrees in both computer science and business administration.
Workshop | Machine Learning | Intermediate – Advanced
Gradient Boosted Trees have become a widely used method for prediction using structured data. They generally provide the best predictive power, but are sometimes criticized for being “difficult to interpret”. However, to some degree, this criticism is misdirected — rather than being uninterpretable, they simply have more complicated interpretations, reflecting a more sophisticated understanding of the underlying dynamics of the variables. In this workshop, we will work hands-on using XGBoost with real-world data sets to demonstrate how to approach data sets with the twin goals of prediction and understanding in a manner such that improvements in one area yield improvements in the other. Using modern tooling such as Individual Conditional Expectation (ICE) plots and SHAP, as well as a sense of curiosity, we will extract powerful insights that could not be gained from simpler methods. In particular, attention will be placed on how to approach a data set with the goal of understanding as well as prediction…more details
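An Individual Conditional Expectation plot can be computed with a few lines of numpy. The sketch below uses a hand-made toy model with an interaction (an assumption for illustration, not XGBoost or SHAP) to show how ICE curves reveal effects that the averaged partial-dependence curve hides:

```python
import numpy as np

def ice_curves(predict, X, feature, grid):
    """ICE: for each row, sweep one feature over a grid while holding the others fixed."""
    curves = np.empty((len(X), len(grid)))
    for i, row in enumerate(X):
        for j, v in enumerate(grid):
            x = row.copy()
            x[feature] = v
            curves[i, j] = predict(x)
    return curves

# Toy model with an interaction: the effect of x0 flips with the sign of x1
model = lambda x: x[0] * (1.0 if x[1] > 0 else -1.0)
X = np.array([[0.0, 1.0], [0.0, -1.0]])
curves = ice_curves(model, X, feature=0, grid=np.linspace(-1, 1, 5))
```

Here one ICE curve rises and the other falls, while their average (the partial-dependence curve) is flat everywhere: exactly the kind of “more complicated interpretation” the workshop is about.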
Brian Lucena is Principal at Lucena Consulting and a consulting Data Scientist at Agentero. An applied mathematician in every sense, he is passionate about applying modern machine learning techniques to understand the world and act upon it. In previous roles he has served as SVP of Analytics at PCCI, Principal Data Scientist at Clover Health, and Chief Mathematician at Guardian Analytics. He has taught at numerous institutions including UC-Berkeley, Brown, USF, and the Metis Data Science Bootcamp.
Workshops | Machine Learning | Open Source | Beginner – Intermediate
RAPIDS is an open source initiative to accelerate the complete end-to-end data science ecosystem with GPUs. It consists of several projects that expose familiar interfaces, making it easy to accelerate the entire data science pipeline – from the ETL and data wrangling to feature engineering, statistical modeling, machine learning, and graph analysis.
This presentation targets data scientists familiar with the Python data science ecosystem, which includes Pandas, Numpy, and Scikit-learn. A very brief overview of the RAPIDS ecosystem will get us kicked off, followed by an in-depth overview of cuML, the RAPIDS machine learning library.
Novice data scientists, who are new to the RAPIDS ecosystem, will benefit from an introduction to the ease with which cuML can accelerate their existing sklearn workflows. Intermediate and advanced data scientists will gain a better understanding of cuML’s flexible architecture, including how it can be used to scale machine learning workloads across multiple GPUs and multiple nodes…more details
Talks | AI for Engineers | All levels
Just as we’ve gotten settled into best practices for software design and delivery across a variety of platforms (desktop, web, mobile, cloud), we get another “opportunity” thrown in the works: AI and machine learning.
Fewer than 30% of AI projects ever see the light of the “in production” day, despite many showing extremely positive results. AI can and should have better numbers than this. AI can and should fit into our regular software and systems development processes.
The core problem: most companies treat AI as a new “beast,” when they should be integrating AI into their normal development and IT processes. This talk will go through what is different about adding AI into your software, while also highlighting what is the same – so you know what not to change! I’ll cover the diffs on:
• Product (project) selection to definition
• Who plays what roles
• Users gonna use – or not
• How to drive simplicity in complexity…more details
Jana Eggers is the CEO of the neuroscience-inspired artificial intelligence platform company, Nara Logics. She’s an experienced tech exec focused on inspiring teams to build great products. She’s started and grown companies, and led large organizations at public companies. She is active in customer-inspired innovation, the artificial intelligence industry, and Autonomy/Mastery/Purpose-style leadership.
She’s held technology and executive positions at Intuit, Blackbaud, Los Alamos National Laboratory (computational chemistry and supercomputing), Basis Technology (internationalization technology), Lycos, American Airlines, and Spreadshirt (e-commerce).
Workshops | Machine Learning | Open Source | Intermediate
For BlueVine, and indeed for any Fintech company, figuring out the client’s industry is a critical factor in making precise financial decisions. Traditional sources are invariably pricey, inaccurate, or unavailable, and as such leave an opening for an ML-based solution. We met that challenge by building a service that predicts the industry using the business’s publicly available web data. By employing the latest innovations in NLP (BERT) and some of the most powerful scraping and deployment tools available (Scrapy and Amazon SageMaker), we were able to dramatically surpass the performance achieved by any other such tool in the space.
This presentation will cover the entire development pipeline hands-on: crowdsourcing a tagged sample, building a smart and scalable web scraper, prepping and feeding the resulting raw data into BERT, fine-tuning the model, and finally deploying it as a cloud-based service behind an API. Both model training and deployment will be done through Amazon SageMaker…more details
Ido Shlomo is the head of BlueVine’s data science team in the US, where he works on applying machine learning and other automation solutions for risk management, fraud detection and marketing purposes. Recent work is focused on implementing complex NLP tasks in production systems, and specifically on the challenge of consuming unstructured data. Previously Ido worked in the Economics department at Tel Aviv University as a researcher in structural macroeconomic modeling. Ido holds a dual BA in mathematics and philosophy and an MA in economics, both from Tel Aviv University.
Talks | Machine Learning | Intermediate-Advanced
Machine Learning (ML) on devices is leading a paradigm shift in the world of machine learning. The driving force behind this is the goal of bringing ML closer to user data to ensure privacy and a good user experience through sub-second latencies. We are witnessing innovations across the stack, from hardware to applications; a number of frameworks like Core ML, WinML, and TFLite are emerging to solve the problem, but for an enterprise, adopting a strategy around this is not easy. Applications running on a variety of laptops and mobile devices require a number of aspects to be considered when designing and architecting a solution at scale. In this session, we will cover some of these aspects to furnish real results and keep up with the evolving ecosystem…more details
Talks | Machine Learning | DevOps & Management | Beginner-Intermediate
We are in the age of data. In recent years, many companies have already started collecting large amounts of data about their business. Many other companies are starting now.
However, you know that before you can train any decent supervised model you need ground truth data. Usually, supervised ML models are trained on old data records that are already somehow labeled. And this is the ugly truth: before proceeding with any model training, any classification problem definition, or any further enthusiasm in gathering data, you need a sufficiently large set of correctly labeled data records to describe your problem. And data labeling – especially in a sufficiently large amount – is … expensive…more details
Paolo Tamagnini currently works as a data scientist at KNIME.
Paolo holds a master’s degree in data science and has research experience in data visualization techniques for machine learning interpretability.
Talk | Deep Learning | Intermediate-Advanced
Deep learning practitioners spend most of their time troubleshooting & debugging. Troubleshooting models is notoriously difficult because the same performance problem can be attributed to many different sources, and performance can be extremely sensitive to small changes in architecture and hyperparameters. In this talk, I will attempt to demystify the troubleshooting process by presenting a decision tree for improving your model’s performance…more details
Josh is a Research Scientist at OpenAI working at the intersection of machine learning and robotics. His research focuses on applying deep reinforcement learning, generative models, and synthetic data to problems in robotic perception and control.
Additionally, he co-organizes a machine learning training program for engineers, called Full Stack Deep Learning, focused on production-ready deep learning.
Josh did his PhD in Computer Science at UC Berkeley, advised by Pieter Abbeel. He has also been a management consultant at McKinsey and an Investment Partner at Dorm Room Fund.
Talk | Machine Learning | All Levels
We will discuss several problems related to the challenge of making accurate inferences about a complex phenomenon, in the regime in which the amount of available data (i.e., the sample size) is too small for the empirical distribution of the data to be an accurate representation of the phenomenon in question. This challenge arises in many settings involving high-dimensional data, or settings in which there are a large number of rarely observed elements (e.g., a significant fraction of words encountered in a typical text corpus occur only once or twice in the corpus; similarly for rare genetic mutations in genomic datasets). We show that for several fundamental and practically relevant settings, it is possible to “denoise” the empirical distribution of the data significantly. As a component of this approach, we describe how one can make accurate inferences about the “unseen” portion of the distribution, corresponding to events that were never observed in the given dataset…more details
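One classical ingredient of reasoning about the “unseen” portion of a distribution is the Good-Turing estimate, which estimates the total probability mass of never-observed symbols from the count of symbols seen exactly once. A minimal sketch on a toy sample (the speaker's methods go well beyond this):

```python
from collections import Counter

def missing_mass(sample):
    """Good-Turing estimate of the probability of unseen symbols:
    (number of distinct symbols seen exactly once) / (sample size)."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

# 3 singletons out of 5 tokens -> estimated unseen mass of 0.6
est = missing_mass(["the", "cat", "sat", "the", "mat"])
```

The intuition: words seen once are the best proxy for words you were about to see for the first time, which is why corpora dominated by singletons imply a large unseen mass.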
Gregory is an Assistant Professor in Stanford’s Computer Science Department, working at the intersection of Algorithms, Machine Learning, Statistics, and Information Theory. One of the main themes of his work is the design of efficient algorithms for accurately inferring information about complex distributions, given limited amounts of data or limits on other resources such as computation time, available memory, communication, or the quality of the available data. Prior to joining Stanford, he was a postdoc at Microsoft Research, New England, and received his PhD from Berkeley in Computer Science.
Talks | Data Visualization | Beginner-Intermediate
The ultimate goal in any sport is to win, and coaches and athletes strive to compete at peak performance. Sport science is a rapidly growing field that addresses the questions that come with trying to help athletes be at their best when it matters most. This takes a strong understanding of exercise physiology as well as data science in order to find answers to the questions that arise. How hard does an athlete need to train on a given day? How does a practice session compare to a game in terms of physical demand? These are the types of questions that we try to answer using HPCC Systems with our data at NC State University Strength and Conditioning…more details
Christopher Connelly has a Masters in Exercise Science and Nutrition from Sacred Heart University. He has been working with NC State Strength and Conditioning for over two years where he has built a platform for athlete data monitoring in python. He has formerly worked with the NC Courage professional soccer team doing GPS and data monitoring. His graduate work was primarily in biomechanics and movement analysis where he did data analysis with 3D movement testing and electromyography. He has his CSCS certification with the NSCA as well as level 1 coach with USA weightlifting. Chris took part in the HPCC Systems summer internship program working on a project for cleaning and analysis of collegiate soccer GPS data in HPCC Systems.
Talk | Deep Learning | Machine Learning | Intermediate
Distributed representations of words have proven successful in addressing the drawbacks of symbolic representations, which treated words as atomic units of meaning. Symbolic representations treat each word like an island, unable to capture similarity and relatedness information between words. Word representations like word2vec, GloVe, and fastText, on the other hand, use distributional semantics and learn compressed representations of words by accounting for contextual information. These representations are able to learn meaningful analogical and lexical relationships, which makes them a popular choice in downstream NLP tasks…more details
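The analogical relationships mentioned above are usually demonstrated with vector arithmetic: king - man + woman lands near queen. A minimal sketch with hand-picked 2-d toy vectors (illustrative assumptions, not trained embeddings):

```python
import numpy as np

# Toy embeddings: one axis encodes royalty, the other encodes gender
V = {
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.0, 1.0]),
    "woman": np.array([0.0, -1.0]),
}

def analogy(a, b, c):
    """Solve a : b :: c : ? by vector arithmetic, returning the nearest word by cosine."""
    target = V[b] - V[a] + V[c]
    def cos(u, w):
        return float(u @ w / (np.linalg.norm(u) * np.linalg.norm(w) + 1e-12))
    # Exclude the query words themselves, as is standard in analogy evaluation
    return max((w for w in V if w not in (a, b, c)), key=lambda w: cos(V[w], target))

word = analogy("man", "woman", "king")
```

Trained embeddings exhibit the same linear structure across thousands of relations, which is what makes them useful as features downstream.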
Sanjana started off with leveraging machine learning algorithms for political data where she worked on drawing inferences from expenditures data for the presidential election cycle. She received her Masters in Computer Science with a specialization in Artificial Intelligence and spent a year working on her thesis for learning to identify writers based on handwriting using neural networks. Sanjana now works on Conversational AI, researching and developing NLP techniques for Mya Systems. Having worked on the applications of machine/deep learning on political and forensic data, she intends to identify and work on unique problems that can be solved using deep learning/machine learning.
Talk | Deep Learning | Intermediate-Advanced
In this talk we explore how context encoding algorithms, starting with the breakthroughs of the past 18 months, impact real-world applications. We will do a deep dive into transformer-based architecture, with a technical focus on the registerables implementation paradigm, and compare performance with prior art for automated “Fake News” evaluation using contemporary deep learning article encoding. We explore how these techniques provide unique interpretability for the fake news use case, and close with a discussion of extensions of these techniques to time series forecasting and telemetry monitoring…more details
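The core operation of the transformer-based architectures discussed above is scaled dot-product attention, which can be sketched in a few lines of numpy (shapes and random inputs are illustrative assumptions):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights

rng = np.random.default_rng(0)
out, w = attention(rng.normal(size=(3, 4)),  # 3 query tokens, d_k = 4
                   rng.normal(size=(3, 4)),
                   rng.normal(size=(3, 4)))
```

The attention weight matrix `w` is also what gives these models part of their interpretability: each row shows which input tokens a given position attended to.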
Mike serves as Chief ML Scientist and Head of Machine Learning for SIG, UC Berkeley Data Science faculty, and Director of Phronesis ML Labs. He has led teams of Data Scientists in the bay area as Head of Data Science at Uber ATG, Chief Data Scientist for InterTrust and Takt, Director of Data Science for MetaScale/Sears, and CSO for Galvanize, where he founded the galvanizeU-UNH accredited Masters in Data Science degree and oversaw the company’s transformation from co-working space to Data Science organization. Mike began his career in academia, serving as a mathematics teaching fellow for Columbia University before teaching at the University of Pittsburgh.
Talk | Research Frontiers | Machine Learning | Intermediate-Advanced
Biology and medicine are deluged with data, and techniques from machine learning and statistics will increasingly play a key role in extracting insights from the vast quantities of data being generated. I will provide an overview of the modeling and inferential challenges that arise in these domains.
In the first part of my talk, I will focus on machine learning problems arising in the field of genomics. The cost of genome sequencing has decreased by over 100,000-fold over the last decade. Availability of genetic variation data from millions of individuals has opened up the possibility of using genetic information to identify the causes of diseases, develop effective drugs, predict disease risk, and personalize treatment…more details
Sriram Sankararaman is an assistant professor in the Departments of Computer Science, Human Genetics, and Computational Medicine at UCLA where he leads the machine learning and genomic lab. His research interests lie at the interface of computer science, statistics and biology and is interested in developing statistical machine learning algorithms to make sense of large-scale biomedical data and in using these tools to understand the interplay between evolution, our genomes and traits. He received a B.Tech. in Computer Science from the Indian Institute of Technology, Madras, a Ph.D. in Computer Science from UC Berkeley and was a post-doctoral fellow in Harvard Medical School before joining UCLA. He is a recipient of the Alfred P. Sloan Foundation fellowship (2017), Okawa Foundation grant (2017), the UCLA Hellman fellowship (2017), the NIH Pathway to Independence Award (2014), a Simons Research fellowship (2014), and a Harvard Science of the Human Past fellowship (2012) as well as the Northrop-Grumman Excellence in Teaching Award at UCLA (2019).
Talk | Open Source | Beginner-Intermediate
The goal of this talk is to demystify the application of artificial intelligence in the security industry. I will address common misconceptions and detail common use cases, while attempting to cut through the hype and inflated marketing claims for AI systems. I will walk through coding examples for training predictive models including spam detection and malware classification. In addition to discussing the benefits, I will also discuss potential pitfalls and challenges. The end of the talk will flip the thesis to discuss applications (or lack thereof) of cybersecurity in AI, detailing famous adversarial attacks on AI systems and methods to mitigate such attacks. Members of the target audience have a curiosity about how AI methodologies are applied in cybersecurity, but need not be experts…more details
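The spam-detection use case mentioned above is often introduced with a Naive Bayes classifier. Below is a minimal sketch trained on a four-document toy corpus (the corpus, equal priors, and add-one smoothing are illustrative assumptions, not the talk's actual examples):

```python
from collections import Counter
import math

# Toy training corpora (illustrative only)
SPAM = ["win free prize now", "free money win"]
HAM = ["meeting at noon", "lunch at noon tomorrow"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

def log_prob(text, counts, total, vocab_size, prior):
    """log P(class) + sum of log P(word | class), with add-one smoothing."""
    lp = math.log(prior)
    for w in text.split():
        lp += math.log((counts[w] + 1) / (total + vocab_size))
    return lp

spam_counts, spam_total = train(SPAM)
ham_counts, ham_total = train(HAM)
vocab = len(set(list(spam_counts) + list(ham_counts)))

def is_spam(text, prior=0.5):
    return (log_prob(text, spam_counts, spam_total, vocab, prior)
            > log_prob(text, ham_counts, ham_total, vocab, prior))
```

The same model structure is also a natural target for the adversarial attacks the talk closes with: an attacker can pad a spam message with high-probability ham words to flip the decision.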
Blowing stuff up in Exponent’s controlled burn room in the morning and feature engineering for deep learning computer vision models in the afternoon. This is not an atypical day for Dr. Dustin Burns, a Senior Scientist consultant in Exponent’s Statistical and Data Sciences practice. This week, he may be developing machine learning risk models to predict failure of assets of a utilities company, and next week he may be flying to the Middle East for targeted data collection for a consumer electronics company. Combining his background in laboratory experiments with his expertise in data analytics, Dr. Burns contributes to projects along the entire data science lifecycle, from experimental design and data collection, through data quality assurance, exploratory data analysis and cleaning, to modeling, visualization, and reporting.
Dustin received his Ph.D. in Physics from the University of California, Davis, in the field of experimental high-energy particle and astroparticle physics. Dustin’s dissertation research was based at the CERN Large Hadron Collider (LHC), where he worked on the team that contributed to the discovery of the Higgs boson in 2012. Dustin is a founding member of the CRAYFIS: Cosmic RAYs Found In Smartphones (http://crayfis.io) experiment, where he applied modeling and statistical techniques to design a crowd-sourced cosmic ray detector array using the cameras in smartphones.
In his current role at Exponent, Dustin leads a multidisciplinary team of Ph.D.s across many industries and government agencies to respond to the world’s most impactful problems and evaluate emerging technologies using AI. The team of statisticians, machine learning developers, programmers, and cybersecurity experts can assist with developing custom algorithms, modernizing analytics programs, advising on regulatory issues (e.g. SOTIF, FDA software validation, ISO 90003/25000), and helping to evaluate intellectual property. Or just blow stuff up.
Talks | DevOps & Management | Machine Learning | Beginner-Intermediate
In the world of data operationalization, the ability to go from data to deployment quickly is paramount. In this session, Keith Moore, Director of Product Management at SparkCognition, covers how the understanding of your data prior to model building can be accelerated, how neural architecture search works, and why deploying models doesn’t need to be as hard as it is today. This session will discuss why common neural network architectures may work well for known, established data problems, but fall short when modern machine learning applications demand more performance and higher levels of sophistication. He will take you through the journey his team faced in productizing better models, explain what most data-driven organizations really care about, and share some of the technology problems that are yet to be solved…more details
Keith Moore is the Director of Product Management at SparkCognition and is responsible for the development of the IoT product line (SparkPredict®). He specializes in applying advanced data science and natural language processing algorithms to complex data sets.
Moore previously worked for National Instruments as an analog-to-digital converter and vibration software product manager. Prior to that, he developed client software solutions for major oil and gas, aerospace, and semiconductor organizations.
Moore has served as a board member of Pi Kappa Phi fraternity, and still volunteers on the alumni engagement committee. He graduated from the University of Tennessee with a B.A. in mechanical engineering.