Microsoft has developed a new set of tools to determine whether AI algorithms are biased. AI algorithms can discriminate against certain groups of people, and Microsoft hopes the new tools will help companies detect such bias and use AI safely.
According to Rich Caruana, a senior researcher at Microsoft working on the bias-detection effort, the new tool amounts to a “dashboard” that engineers can apply to trained AI models.
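Microsoft has not published the dashboard’s internals, but one check a tool like this would plausibly surface is a group-fairness metric computed over a trained model’s predictions. The sketch below, in Python, compares positive-prediction rates across groups of a sensitive attribute (a metric often called demographic parity). The function names, the metric choice, and the toy data are illustrative assumptions, not Microsoft’s actual API.

```python
import numpy as np

def selection_rates(y_pred, groups):
    """Rate of positive predictions for each group of a sensitive attribute."""
    return {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}

def demographic_parity_gap(y_pred, groups):
    """Largest gap in positive-prediction rates between any two groups."""
    rates = selection_rates(y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: predictions from an already-trained model, paired with a
# hypothetical sensitive attribute (e.g. a demographic group) per example.
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

print(selection_rates(y_pred, groups))         # {'a': 0.8, 'b': 0.2}
print(demographic_parity_gap(y_pred, groups))  # 0.6, a large gap worth investigating
```

A dashboard of the kind Caruana describes would presumably track several such metrics at once, since a model can look fair on one measure while failing another.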
Caruana also said: “Things like transparency, intelligibility, and explanation are new enough to the field that few of us have sufficient experience to know everything we should look for and all the ways that bias might lurk in our models. Of course, we can’t expect perfection—there’s always going to be some bias undetected or that can’t be eliminated—the goal is to do as well as we can. The most important thing companies can do right now is educate their workforce so that they’re aware of the myriad ways in which bias can arise and manifest itself and create tools to make models easier to understand and bias easier to detect.”