I was deeply honored to be invited to speak at this year’s AI Experience Worldwide, especially since I got to address a topic that I believe in strongly: the need for trust in AI.
I believe we’re at a crossroads, and as AI enters the mainstream, it is imperative that it upholds the public’s trust. We need to create responsible AI systems that are beneficial for everyone.
In my session, Bringing Trust into AI, we investigate the root causes of bias in machine learning and offer suggestions for addressing them. Built-in guardrails at every stage of development help ensure your AI system is ethical and aligned with your organization's values. We also look at how tools such as those provided by DataRobot help users identify bias and take action.
Many other sessions—especially those focused on core data science activities—touched on the need to trust your AI and gave suggestions for making your systems more reliable.
We also discuss the shift from AutoML to Composable ML: my colleague Sylvain Ferrandiz examines how machine learning systems can fail and what you can do to prevent such breakdowns. The session focuses on how to compose a system that is fully under your control yet still adaptable.
Demand Forecasting with DataRobot demonstrates how extraordinary events (like a global pandemic) can skew data sets and cause traditional approaches to demand forecasting to falter. My colleague Jarred Bultema outlines ways DataRobot’s automated suite of tools can improve predictions and account for disruptive events that can impact performance and introduce unintended bias into your models.
Lastly, in the session Improving Your Model after Deployment with AutoML, my colleague Tristan Spaulding explains how unexpected conditions can degrade performance unless your model is constantly refreshed with live data. He then teaches you how to build and evaluate challenger models in DataRobot to improve overall accuracy and keep ML at peak performance.
For businesses to benefit from machine learning and see transformational growth, it is essential that they ensure their AI systems reflect their principles and values. At DataRobot, trusted AI underpins everything we do. If you are curious, sign up for a personalized demo to learn more.
About the author
Haniyeh Mahmoudian
Global AI Ethicist, DataRobot
Haniyeh is a Global AI Ethicist on the DataRobot Trusted AI team and a member of the National AI Advisory Committee (NAIAC). Her research focuses on bias, privacy, robustness and stability, and ethics in AI and machine learning. She has a demonstrated history of implementing ML and AI across a variety of industries and initiated the incorporation of bias and fairness features into the DataRobot product. She is a thought leader in the areas of AI bias and ethical AI. Haniyeh holds a PhD in Astronomy and Astrophysics from the Rheinische Friedrich-Wilhelms-Universität Bonn.