How AI Can Help Understand Bias

What is Bias?

Bias is a multifaceted concept, and to have an honest discussion we first need to establish what we mean by it. If you look up bias on Wikipedia, you will find several related meanings.1 Here is the opening paragraph of the article:

“Bias is disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. Biases can be innate or learned. People may develop biases for or against an individual, a group, or a belief. In science and engineering, a bias is a systematic error. Statistical bias results from an unfair sampling of a population, or from an estimation process that does not give accurate results on average.”

In a sense, being biased simply means being systematically wrong in one way or another. However, usually there is an additional dimension of ethics, that is, being wrong for the wrong reasons. Let's look at several examples and discuss in more detail.

Examples of Bias

Confirmation Bias

The first example is confirmation bias, the tendency to look for evidence that confirms one's existing beliefs. When seeking truth, it is much more productive to look for things that could disprove your beliefs, which is how science works. Unfortunately, we did not evolve as truth seekers. We have some baggage from when we inhabited the savanna and priorities were different. While confirmation bias distorts our view of reality, it is usually not deliberate or nefarious. In addition, once you know about it, you can learn to override your innate tendencies.

Gender Bias – Simpson's Paradox

The second example is the UC Berkeley gender bias case, which does have an ethical component and shows that one has to be careful when thinking about (possible) bias. In the statistics literature it is a famous example of Simpson's Paradox.2

In the fall of 1973, 44% of the men who applied to UC Berkeley were admitted, but only 35% of the women. Is this an example of gender bias in favor of men? The answer may surprise you if you have never seen this case before.

The answer is no, and, in fact, it can be shown that, counterintuitively, there was actually a small but significant bias in favor of women. How can this be possible? The explanation is that women applied to more competitive departments with lower rates of admission; the small numerical sketch below shows how such a reversal can happen. Tangentially related, but worth mentioning, is the idea that equality of opportunity is not the same as equality of outcome, and one cannot naively look at outcomes and attribute them to bias without investigating further.
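
To make the reversal concrete, here is a minimal Python sketch with made-up numbers (not the actual UC Berkeley data). Each department admits women at a slightly higher rate than men, yet the overall rate favors men because most women applied to the more selective department.

# Hypothetical admissions counts per department:
# (men applied, men admitted, women applied, women admitted)
applicants = {
    "Dept A (less selective)": (800, 500, 100, 65),
    "Dept B (more selective)": (200, 30, 900, 140),
}

men_app = men_adm = women_app = women_adm = 0
for dept, (ma, md, wa, wd) in applicants.items():
    print(f"{dept}: men {md / ma:.0%}, women {wd / wa:.0%}")
    men_app, men_adm = men_app + ma, men_adm + md
    women_app, women_adm = women_app + wa, women_adm + wd

# Each department favors women (62% vs 65%, 15% vs 16%), yet overall
# the men's rate is far higher (53% vs 20%) -- Simpson's Paradox.
print(f"Overall: men {men_adm / men_app:.0%}, women {women_adm / women_app:.0%}")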

Gender Bias – Apple Card

The third example is related to the previous one and was in the news not too long ago, namely the Apple Card gender bias.3 In the previous example, gender was explicitly part of the analysis (at least eventually). In the Apple Card example, gender was left out of the model completely, and it was claimed that this implied there could be no bias. Yet the card was reportedly giving higher lines of credit to men than to women, even in cases where everything else was equal, such as for a married couple.

The problem with that approach, as it turns out, is that other variables can serve as a proxy for gender, and if gender itself is not included, there is no way to check for bias. A simple way to probe for such proxies is sketched below.
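
One way to probe for proxies (a sketch only, assuming scikit-learn and already-encoded numeric inputs) is to try to predict the protected attribute from the remaining variables. If that works well above the base rate, the other variables jointly encode gender, and dropping the gender column by itself does not remove the information.

# Hypothetical check: can the other model inputs predict gender?
# `features` holds the non-protected inputs (numeric), `protected`
# the gender labels for the same rows.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def proxy_score(features, protected) -> float:
    """Mean cross-validated accuracy of predicting the protected attribute."""
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, features, protected, cv=5).mean()

# An accuracy far above the majority-class rate suggests strong proxies exist.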

Again, we can see that bias is a subtle concept and there are no straightforward answers.

Statistical Bias

The fourth example is statistical bias. Suppose one builds an admission model on first-time freshmen data, which is then used to predict likelihoods of enrollment for potential new first-time freshmen and also for transfer students. The former is perfectly fine, but predicting on transfers will bias the predictions because transfer students are generally more likely to enroll.

Obviously, we at Othot guard against this and similar types of bias by comparing predict data distributions to train data distributions and flagging large deviations that need to be explained.
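
A simplified sketch of such a check (assuming pandas DataFrames and a shared categorical column, here a hypothetical student_type field, rather than our production code) might look like this:

import pandas as pd

def distribution_shift(train: pd.DataFrame, predict: pd.DataFrame,
                       column: str, threshold: float = 0.10) -> pd.DataFrame:
    """Flag categories whose share differs notably between train and predict data."""
    train_share = train[column].value_counts(normalize=True)
    predict_share = predict[column].value_counts(normalize=True)
    report = pd.DataFrame({"train": train_share, "predict": predict_share}).fillna(0.0)
    report["abs_diff"] = (report["train"] - report["predict"]).abs()
    # Anything returned here is a deviation that needs to be explained before
    # trusting the predictions (e.g. transfers showing up at predict time when
    # the model was trained on first-time freshmen).
    return report[report["abs_diff"] > threshold]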

How to Address Bias in Models

As should be clear by now, if one wants to assess whether there is gender, race, or any other type of bias, it is not a good idea to discard these variables from the data altogether, because doing so makes it impossible to detect potential bias. However, we should discuss in some more detail how we deal with these variables when building models.

In general, when building models, we use all the available data, including variables like gender, race, and others for which preventing bias is important, and we let our algorithms ignore inputs that turn out not to be useful.

What comes next is critical.

After building the models, we can check whether the predictions exhibit any bias. For example, we can compare the average likelihood of enrollment between men and women and see if there is a difference. Of course, even if there is a difference, that does not mean the predictions are necessarily biased.
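
Computing that comparison on the scored data is straightforward; here is a minimal sketch, assuming a pandas DataFrame with hypothetical gender and likelihood columns, the latter holding each student's predicted probability of enrolling.

import pandas as pd

def group_prediction_gap(scored: pd.DataFrame,
                         group_col: str = "gender",
                         score_col: str = "likelihood") -> pd.Series:
    """Average predicted likelihood per group."""
    # A gap between groups is a prompt to investigate, not proof of bias.
    return scored.groupby(group_col)[score_col].mean()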

But how do you know whether they are? The answer is by using explainable AI.4

The Othot Platform has a feature that can explain why one group has a higher prediction than another group on average. For example, for the UC Berkeley case, it will show that the department (assuming it is in the data) has, relatively speaking, a more positive impact on the likelihood of admission for men than for women (or, conversely, a more negative impact on women).
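
As an illustration of the general idea (a sketch using the open-source shap library rather than our platform's implementation, and assuming a tree-based model such as an XGBoost classifier), per-row, per-feature attributions can be averaged within each group; comparing the group averages shows which variables, such as department, push one group's predictions above the other's.

import pandas as pd
import shap

def explain_group_gap(model, X: pd.DataFrame, groups: pd.Series) -> pd.DataFrame:
    """Average per-feature SHAP contribution for each group."""
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # one contribution per feature per row
    contributions = pd.DataFrame(shap_values, columns=X.columns, index=X.index)
    # `groups` must be aligned row-for-row with X (e.g. men / women labels).
    # Rows of the result are groups, columns are features; the difference
    # between rows shows which features explain the gap in predictions.
    return contributions.groupby(groups.values).mean()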

It should be pointed out that there may be legitimate scenarios where gender is part of the explanation, but this should be examined on a case-by-case basis. For example, initiatives to encourage more women in STEM could make gender an important variable in admissions, and models would pick up on that.

Finally, it is possible that the explainable AI is showing gender (or race, etc.) as a factor without a logical explanation. One possibility is that somehow the process is really biased, and the model is simply reflecting that. This knowledge can be used to improve the processes.

Perhaps more likely, it is also possible that important variables are still missing from the data, so the model could not use them. This could have been the scenario at UC Berkeley had department not been recorded. It should have raised questions, but the only realistic solution to this problem would have been to collect more data (which is always a good idea).

A Few Final Thoughts

In an ideal world, gender, race, etc. should be selected by models only in rare cases (like the STEM example discussed above). That does not mean these variables should be left out as inputs altogether, since doing so can hide bias, as the Apple Card example showed. And even when there seems to be bias, there may be additional data that can be collected to show it is not actually there, as the UC Berkeley case showed. Mitigating and eliminating bias is an active journey that can be challenging, but with the right data and models it is a solvable problem.

 
Sources:
1 https://en.wikipedia.org/wiki/Bias
2 https://en.wikipedia.org/wiki/Simpson%27s_paradox
3 https://www.wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/
4 More about this in a future blog post.