Chris Hazard, PhD

CTO and Co-founder at Howso

    Dr. Chris Hazard is co-founder and CTO of Howso. Howso's understandable, privacy-enhancing AI spun out of Hazardous Software, a company Chris founded in 2007 that focuses on decision support, visualization, and simulation for hard strategy problems in large organizations, the DoD, and government. Chris holds a PhD in computer science from NC State, with a focus on artificial intelligence for trust and reputation. He was a software architect for CDMA infrastructure at Motorola, worked on robot coordination and logistics at Kiva Systems (now Amazon Robotics), is an active member of the CompTIA AI Advisory Council, and has advised NATO on cybersecurity policies. He has led simulation, serious gaming, and software projects related to cybersecurity, social engineering, logistics, economics, and psychology, and is a certified hypnotist. Dr. Hazard is also known for his 2011 game Achron, which won GameSpot's Best Original Game Mechanic award, and for his research on AI/ML, privacy, game design, and human-computer interaction, for which he has given keynote speeches at major conferences and been featured in mainstream media.

    All Sessions by Chris Hazard, PhD

    Day 2 04/24/2024
    3:30 pm - 4:00 pm

    How to Preserve Exact Attribution through Inference in AI: Get the Correct Explanations and Preserve Privacy via Instance-Based Learning

    ML Safety & Security

    Most forms of machine learning explainability are ex post: they build an approximate model of the model in order to guess why a prediction was made. For data scientists working with AI models today, that won't cut it. There is an increasing need for full data transparency and explainability to mitigate bias, incorrect information, and hallucinations, along with growing demands for privacy.

    In this session, Dr. Chris Hazard, noted computer scientist, AI expert, and founder of a leading explainable AI company, will show data practitioners how to leverage cutting-edge instance-based learning (IBL) to solve these problems. Most AI today is a black box. IBL offers a fully explainable alternative, with precise control over data provenance and lineage through inference: the derivation of each inference can be understood directly from the data. Having worked with IBL for over a decade, Chris will explain how modern IBL techniques, built around information theory, match the performance characteristics of modern models. He will also show how IBL techniques are strongly robust to adversarial attacks and are automatically calibrated.

    Attendees will learn how the same mechanisms that yield this performance are closely related to differentially private mechanisms, and how to deploy them to generate strongly private synthetic data at scale. Through practical examples, attendees will learn why attribution through inference is vitally important for data-centric AI, how to debug data and understand outcomes, and how to protect privacy and anonymity when it matters.
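    The abstract does not specify Howso's information-theoretic machinery, so the sketch below is a generic illustration of the two ideas it names, not Howso's implementation: exact attribution in instance-based learning (here, distance-weighted k-NN, where every prediction traces back to specific training rows with explicit weights) and the exponential mechanism, the standard differentially private selection primitive that such instance-selection schemes resemble. The function names are hypothetical, and `private_neighbor` assumes a utility sensitivity of 1 for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_with_attribution(X_train, y_train, x_query, k=3):
    """Distance-weighted k-NN prediction that also returns the exact
    training rows (and their weights) responsible for the answer."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)  # inverse-distance weighting
    weights /= weights.sum()
    prediction = float(weights @ y_train[nearest])
    # Attribution is exact: (training row index, contribution weight).
    return prediction, list(zip(nearest.tolist(), weights.tolist()))

def private_neighbor(X_train, x_query, epsilon=1.0):
    """Select a neighbor with probability proportional to
    exp(epsilon * utility / 2), with utility = -distance: the
    exponential mechanism (sensitivity assumed to be 1)."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    scores = np.exp(-epsilon * dists / 2.0)
    probs = scores / scores.sum()
    return int(rng.choice(len(X_train), p=probs))

# Toy data: a noisy linear target over four features.
X = rng.normal(size=(200, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=200)

pred, contributions = predict_with_attribution(X, y, X[0])
print(f"prediction: {pred:.3f}")
for row, w in contributions:
    print(f"  training row {row} contributed weight {w:.2f}")
print("privately selected neighbor:", private_neighbor(X, X[0]))
```

    Because the prediction is literally a weighted combination of stored instances, the attribution is exact rather than a post-hoc approximation, which is the contrast the abstract draws with ex post explainability.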
