How to Build a Trusted AI System


As AI capabilities advance, especially in industry, companies need to ensure their AI is robust, effective, reliable, and used ethically. AI systems should also be responsible and fair, with governance and accountability embedded in their processes. Trusting an AI system therefore requires an AI trust framework for understanding and governing its behavior. In this presentation, I discuss DataRobot’s Trusted AI framework and the three components required to engender trust and ethics in AI systems. Our goal is to provide not only a theoretical understanding of trust and ethics but also a practical plan that can be implemented in any infrastructure right away.


Haniyeh is the Global AI Ethicist at DataRobot's Trusted AI Center of Excellence. She leads a team of Applied AI Ethicists who provide actionable, trusted technical resources to customers. Her research focuses on bias, trust, and ethics in AI and ML. Haniyeh holds a PhD in Astronomy and Astrophysics from Bonn University, was recently awarded VentureBeat's Women in AI Award for Responsibility and Ethics in AI, and was named by Forbes as one of the AI Ethics Leaders.

Open Data Science