How can we prevent artificial intelligence from discriminating against people? It is a question that has the multinational corporation Microsoft concerned. Fraunhofer IAO is helping to develop solutions.
When people ostracize others or discriminate against them, it provokes enormous fury. However, Andre Hansel is aware of another form of discrimination that, given his profession, makes him almost more furious: the unequal treatment of human beings by systems driven by artificial intelligence (AI). “All over the world, software with the ability to learn and make complex decisions is on the rise in ever more aspects of our daily life,” explains Hansel, Program and Operations Manager at Microsoft in Berlin. “It is not acceptable for certain groups of people to be discriminated against due to the use of an AI that has not been adequately programmed or has been ‘trained’ in a biased way.”
As an example, he points to media reports of cases where facial recognition software could only reliably match the faces of white people. Likewise, it would be unacceptable if an AI system at a bank were to deny loans to certain applicants despite their sufficient creditworthiness. That is why Microsoft has developed a catalog of six criteria for Responsible AI and is also promoting responsible approaches to AI worldwide. For the global software market leader, it is important to let its target groups and political decision-makers participate in these findings, discuss them and develop them further together – including in the German capital, Berlin.
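The kind of unequal treatment described here can be made measurable. As a minimal, hypothetical sketch (the function and sample data below are invented for illustration and are not part of Microsoft's criteria or any specific toolkit), one simple check is to compare a system's approval rates across applicant groups, a so-called demographic parity check:

```python
def approval_rates(decisions):
    """Return the approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Invented sample of loan decisions: (applicant group, approved?)
sample = [("A", True), ("A", True), ("A", False), ("A", True),
          ("B", True), ("B", False), ("B", False), ("B", False)]

rates = approval_rates(sample)             # {'A': 0.75, 'B': 0.25}
gap = max(rates.values()) - min(rates.values())
print(rates, gap)                          # a large gap flags the system for review
```

A check like this does not prove discrimination on its own, but a large gap between groups is exactly the kind of signal that should trigger scrutiny of the training data and decision logic at the development stage.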
“We wanted to use our representative office on Unter den Linden to create a pleasant and transparent place where people from the worlds of business, administration and politics can come and talk to us about Responsible AI,” says Hansel. “We have also brought experts from Fraunhofer IAO on board, because with their Center for Responsible Research and Innovation (CeRRI), they are specialists in responsibly shaping the digital transformation.” He also points out that while Microsoft is in many ways primarily oriented towards technology, CeRRI contributes additional expertise in areas such as ethics, organizational development and communication design – and especially in the realm of Responsible AI.
As Jakob Häußermann, Project Manager on the Fraunhofer IAO side, recalls: “The idea was to create a physical space for exchange, a kind of laboratory where we could grapple with, discuss and further develop these seemingly very abstract topics.” This, he says, could help them to better comprehend relationships and collaboratively develop solutions. When it came to choosing the ideal location for the Responsible AI Learning Lab, their focus quickly narrowed to a particular area of the Microsoft office in Berlin, which could be seen from the street and could function as a lounge. There, they could not only familiarize various target groups from industry, administration and politics with the importance of Responsible AI, but also collaborate with them in developing concrete approaches for their respective organizations.
Fraunhofer IAO then developed the concept for a workshop on Responsible AI, where Microsoft Berlin’s customers and discussion partners can come to terms with the topic themselves – with moderation and guidance by Fraunhofer IAO experts and focusing on questions such as: Why is Responsible AI necessary for our organization, and society as a whole? What can we do at the development stage to prevent software from behaving in an unintentionally discriminatory manner later on? What values and principles should apply here, and how can they be put into practice?
Rather than relying solely on virtual formats and touch screens, the interior designers deliberately set out to make the open and friendly space of the Responsible AI Lab “touchable.” They developed movable boards where participants could stick up pre-prepared, relevant content and documents, and order and rearrange them in a flexible way. “Our goal is to work with the participants to determine what significance Responsible AI should have in their organization and jointly develop concrete approaches for implementing it,” says Häußermann. “It was important to us to not only focus on technical aspects, but also to take into account strategies and measures relating to governance and organizational culture.” The only bump in the road so far has been that the coronavirus and the corresponding social-distancing measures have thwarted their plan to hold workshops in a physical space for the time being. But they will make up for that yet – “and until then,” says Hansel, “we will flexibly convert the concept into equally exciting online formats.”
“As the world’s leading software developer, Microsoft is directly affected by questions of ethics and responsibility in the deployment of artificial intelligence. It is unacceptable for groups of people to be discriminated against due to the decisions of software systems, or for them to suffer disadvantages due to shared group features. Microsoft has been grappling with this question for a long time and has developed six principles to ensure that artificial intelligence (AI) can be developed responsibly from the outset. Among these principles are fairness, inclusion and transparency. At our office in Berlin, we now offer the opportunity to work together with our potential customers and participants from politics and administration to establish how the principles of Responsible AI can be implemented in organizations. Workshops in the Responsible AI Learning Lab rely on haptic, i.e. sensory, techniques to support knowledge acquisition and participation. This concept was developed for us by Fraunhofer IAO, which also moderates the workshops with great skill. We have transitioned the workshops to an online format for now, while the pandemic prevents them from taking place in person. We are looking forward to the wide interest that this will generate in business, administration and politics!”