DigitalDialog / 12. April 2022, 15:00 - 16:00
Tackling Unwanted Bias in Artificial Intelligence
Insights into the IBM Open Source Toolkit “AI Fairness 360”
Bias in Artificial Intelligence (AI) can be defined as an anomaly in the output of machine learning algorithms that contributes to discrimination against certain groups of people. Participants will learn what bias in AI is and how to combat it. Kush Varshney, Distinguished Research Staff Member at IBM, will present the free anti-bias toolkit “AI Fairness 360” and join us to discuss bias in AI.
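One common way to make such bias measurable is the disparate impact ratio, one of the fairness metrics implemented in AI Fairness 360. The following sketch computes it with plain Python on invented toy data (the group names and outcomes are illustrative only, not from the event):

```python
# Hypothetical toy data: favorable (1) vs. unfavorable (0) outcomes
# for a privileged and an unprivileged group. Purely illustrative.
outcomes = {
    "privileged":   [1, 1, 1, 0, 1, 1, 0, 1],
    "unprivileged": [1, 0, 0, 1, 0, 0, 0, 1],
}

def favorable_rate(labels):
    """Fraction of favorable outcomes in a group."""
    return sum(labels) / len(labels)

# Disparate impact: ratio of the unprivileged group's favorable-outcome
# rate to the privileged group's. A widely used rule of thumb flags
# values below 0.8 as potentially discriminatory.
di = favorable_rate(outcomes["unprivileged"]) / favorable_rate(outcomes["privileged"])
# Here di == 0.5, well below the 0.8 threshold.
```

A value of 1.0 would mean both groups receive favorable outcomes at the same rate; AIF360 packages this and many related metrics behind a common dataset interface.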
Kush R. Varshney, researcher and manager at IBM Research in New York, conducts research in this area and, with his team, has developed the open-source software toolkit “AI Fairness 360” (AIF360), which can help detect and remove bias in machine learning models. In a talk of about half an hour, followed by a discussion, he will provide insight into the extent to which AI Fairness 360 can detect and mitigate discrimination and bias in artificial intelligence.
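To give a flavor of the mitigation side, the sketch below implements in plain Python the idea behind reweighing (Kamiran & Calders), one of the pre-processing algorithms AIF360 ships: each training instance is weighted so that group membership and label become statistically independent in the weighted data. The dataset is invented for illustration:

```python
from collections import Counter

# Toy dataset of (group, label) pairs — purely illustrative.
data = [
    ("privileged", 1), ("privileged", 1), ("privileged", 0),
    ("unprivileged", 0), ("unprivileged", 0), ("unprivileged", 1),
]
n = len(data)
group_counts = Counter(g for g, _ in data)   # P(group) * n
label_counts = Counter(y for _, y in data)   # P(label) * n
pair_counts = Counter(data)                  # P(group, label) * n

# Reweighing: weight each (group, label) combination by
#   P(group) * P(label) / P(group, label),
# which up-weights under-represented combinations (e.g. unprivileged
# with a favorable label) and down-weights over-represented ones.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}
```

Under these weights both groups have the same weighted favorable-outcome rate, so a model trained on the reweighted data no longer sees the original imbalance. AIF360 exposes this technique as a ready-made transformer alongside other pre-, in-, and post-processing algorithms.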
The event will cover the following topics:
- Bias in Research and Development, Bias in AI
- IBM Open Source Toolkit “AI Fairness 360”
The panel discussion will take place as part of the block seminar “Hacking Innovation Bias” at TU Berlin, designed and moderated by Dr. Clemens Striebing (Senior Researcher, Fraunhofer IAO) and Regina Sipos (Research Associate, TU Berlin). In the seminar, students examine the extent to which gender stereotypes are reproduced in research and development processes in engineering and technology, and what this means for the end products of those processes.
The event is intended for
Established and prospective scientists in the field of AI and the engineering sciences in general, as well as equal opportunity officers.