Bias is an intrinsic part of human decision-making. Whether it’s intentional or unintentional, there is always an element of subjectivity in our choices. Therefore, machine learning (ML) models, which are ultimately designed and trained by humans, are also prone to biases.
When deployed responsibly, ML models have the potential to identify and reduce human biases. On the flip side, when biases are baked into an ML model, it can perpetuate and amplify them. For example, when U.S. courts began using an AI tool for case management and decision support, the system systematically overestimated the risk that Black defendants would reoffend compared to their actual reoffense rates.
So, the question is: how do we minimize bias in machine learning models and ensure fair outcomes? Let’s dive into this blog post to find the answers!
What is Bias in Machine Learning?
In machine learning, bias is a phenomenon that skews an algorithm's results in favor of or against a group of subjects who share certain traits. These skewed results stem from incorrect assumptions in the machine learning process – often due to faulty, incomplete, or biased data. Let us explore a few different kinds of biases:
Sampling Bias
Machine learning models are trained on large amounts of historical data to make predictions. When the sample data fed into a model is itself tainted, it can lead to erroneous outputs. For instance, take the case of the hiring algorithm Amazon developed in 2014 to screen resumes. The algorithm was trained on resumes submitted to Amazon over the previous decade, most of which came from men. As a result of this sampling bias, the hiring tool systematically discriminated against female applicants.
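As a rough illustration of how such skew can be caught before training, the sketch below compares the demographic make-up of a training set against reference population shares. The `gender` column name, the 50/50 reference shares, and the toy data are all assumptions for illustration, not Amazon's actual pipeline.

```python
# A minimal sampling-bias check: compare group shares in the training data
# against reference shares for the population the model should serve.
import pandas as pd

def sampling_bias_report(train_df: pd.DataFrame,
                         group_col: str,
                         reference_shares: dict) -> pd.DataFrame:
    """Tabulate training-set group shares next to reference shares."""
    observed = train_df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        actual = float(observed.get(group, 0.0))
        rows.append({"group": group,
                     "training_share": round(actual, 3),
                     "reference_share": expected,
                     "gap": round(actual - expected, 3)})
    return pd.DataFrame(rows)

# Toy example: a resume pool that is 88% male versus a 50/50 reference.
train = pd.DataFrame({"gender": ["male"] * 880 + ["female"] * 120})
print(sampling_bias_report(train, "gender", {"male": 0.5, "female": 0.5}))
```

A large gap for any group is a signal to rebalance or re-collect the training data before the model ever sees it.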
Protected Attribute Proxies
Protected attributes are personal characteristics, such as race, color, age, and sexual orientation, that are protected by law and cannot legally be used as the basis for a decision. However, ML models can inadvertently discriminate by relying on proxies that correlate with these protected attributes. For example, imagine building a credit decision model for a country where people of one race live in the northern part and people of another race live in the southern part. Even though race isn't explicitly fed into the model, it might still discriminate based on race simply by virtue of where a person lives.
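One hedged way to surface such proxies is to measure the association between each candidate feature and the protected attribute before training. The sketch below uses Cramér's V on made-up `region` and `race` columns; the column names and data are hypothetical and only meant to mirror the north/south example above.

```python
# A minimal proxy check: Cramér's V between a candidate feature and a
# protected attribute (0 = independent, 1 = perfectly associated).
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(df: pd.DataFrame, feature: str, protected: str) -> float:
    table = pd.crosstab(df[feature], df[protected])
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Toy data in which region almost perfectly encodes race, i.e. acts as a proxy.
df = pd.DataFrame({
    "region": ["north"] * 95 + ["south"] * 5 + ["south"] * 95 + ["north"] * 5,
    "race":   ["A"] * 100 + ["B"] * 100,
})
print(f"Cramér's V between region and race: {cramers_v(df, 'region', 'race'):.2f}")
```

A value close to 1 tells you that dropping the protected attribute alone won't prevent the model from reconstructing it.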
Selection Bias
Sometimes the data collected to train an ML model isn't representative of the population it will serve, which creates an incomplete picture of the real-world scenario. Due to this selection bias, the ML algorithm is likely to perform well on the specific data set it was trained on but quite poorly on a more general one. For example, consider an ML model trained only on high-net-worth individuals to predict the attrition rate of a bank's customers. As a result of this selection bias, the algorithm will probably produce poor results for lower-net-worth individuals, whose characteristics differ substantially from those of high-net-worth individuals.
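As an illustration, the sketch below compares the distribution of a hypothetical `net_worth` feature in the training sample against the broader customer population using a two-sample Kolmogorov–Smirnov test; the numbers are simulated, not real banking data.

```python
# A minimal selection-bias check: does the training sample's feature
# distribution match the population the model will actually score?
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_net_worth = rng.lognormal(mean=14, sigma=0.3, size=1_000)     # high-net-worth sample
population_net_worth = rng.lognormal(mean=11, sigma=1.0, size=10_000)  # broader customer base

stat, p_value = ks_2samp(training_net_worth, population_net_worth)
print(f"KS statistic={stat:.2f}, p-value={p_value:.3g}")
# A large KS statistic (with a tiny p-value) signals that the training
# sample does not represent the population the model will serve.
```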
Confirmation Bias
Have you ever performed a science experiment in school or college where you steered the process toward the answer you were hoping for? That's confirmation bias at play. When the people training an ML model are looking to confirm a pre-existing hypothesis, they may give more weight to information that supports it while downplaying evidence that contradicts it. This often happens when people are pressured to deliver an answer before seeing what the actual data produces.
Measurement Bias
Measurement bias arises when training data is collected or measured in inconsistent ways. These inconsistencies introduce systematic errors and inaccuracies into the ML model. A classic example is facial recognition: if images are collected with different types of cameras, there will be inconsistencies in the training data. Likewise, if the images in some training sets are captured at higher resolution than in others, measurement bias can creep in.
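A lightweight guard against this is to audit collection metadata before training. The toy snippet below summarizes image resolution per capture source; the column names and values are made up purely for illustration.

```python
# A minimal measurement-bias check: compare image resolution across sources.
import pandas as pd

metadata = pd.DataFrame({
    "source":    ["camera_a"] * 3 + ["camera_b"] * 3,
    "width_px":  [1920, 1920, 1920, 640, 640, 640],
    "height_px": [1080, 1080, 1080, 480, 480, 480],
})

# Large gaps between sources suggest inconsistent measurement conditions
# that should be harmonized (or controlled for) before training.
print(metadata.groupby("source")[["width_px", "height_px"]].median())
```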
Biases in machine learning models can be deliberately introduced to skew results, or they can creep in inadvertently. Either way, it's important to be mindful of them and develop processes to tackle them. Let's learn more about this in the next section.
Fairness in Machine Learning Models
The concept of fairness is difficult to capture since we first have to debate what decisions are fair and what aren’t. For instance, if you’re at a party that’s serving pizza, how do you fairly distribute the pizza slices? It’s probably fair if everyone at the party gets an equal number of slices. But don’t you think the people who made the pizza should get more slices since they put in more effort than the others? Or maybe it’s fair to give more pizza slices to the hungrier people at the party? This subjectivity in what’s fair is precisely why fairness can be difficult to define. And that is why it is as much a business problem as it is a technical one.
In machine learning, fairness is commonly framed as the effort to correct the algorithmic bias that arises from the way training data is sampled, aggregated, labeled, and processed. Some of the criteria used to make ML models fairer are:
- Anti-classification: This is a simple fairness criterion that states that a model should not use protected attributes while making predictions. But such a model is still susceptible to bias due to proxies, and hence this is a weak fairness criterion.
- Group fairness: Group fairness (also called demographic parity) requires that the probability of the model predicting a positive outcome be similar across subpopulations distinguished by protected attributes.
- Equalized odds: Under this criterion, an ML model's predictions must have the same true positive and false positive rates across subgroups. In other words, the model's error rates, conditional on the true outcome, should be equal across subgroups (see the sketch after this list).
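To make these criteria concrete, here is a minimal sketch in plain NumPy, with made-up predictions and group labels, that computes the per-group positive rate (group fairness) and the per-group true/false positive rates (equalized odds).

```python
# Per-group metrics behind the group-fairness and equalized-odds criteria.
import numpy as np

def positive_rate(y_pred, mask):
    """Share of positive predictions within a subgroup (group fairness)."""
    return y_pred[mask].mean()

def true_positive_rate(y_true, y_pred, mask):
    positives = mask & (y_true == 1)
    return y_pred[positives].mean() if positives.any() else np.nan

def false_positive_rate(y_true, y_pred, mask):
    negatives = mask & (y_true == 0)
    return y_pred[negatives].mean() if negatives.any() else np.nan

# Toy labels, predictions, and subgroup membership.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in np.unique(group):
    m = group == g
    print(f"group {g}: positive rate={positive_rate(y_pred, m):.2f}, "
          f"TPR={true_positive_rate(y_true, y_pred, m):.2f}, "
          f"FPR={false_positive_rate(y_true, y_pred, m):.2f}")
```

Group fairness asks the positive rates to be similar across groups; equalized odds asks the TPR and FPR to be similar across groups.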
Apart from setting fairness criteria, we can apply multiple strategies at different stages of the machine learning pipeline. For instance, with model evaluation and auditing, we periodically evaluate a model's fairness and check its data for bias. If there's any suspicion or evidence of unwanted bias, we can investigate further and debug its source. Other intervention points include feature engineering, model training, and model inference.
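As one example of the evaluation-and-auditing strategy, the sketch below wraps the positive-rate check from the previous snippet into a simple periodic audit that flags the model when the gap between subgroups exceeds a tolerance; the 0.1 threshold is an arbitrary assumption, not a regulatory or product-defined value.

```python
# A minimal fairness audit that can be run on each batch of predictions.
import numpy as np

def audit_fairness(y_pred, group, max_gap=0.1):
    """Flag the model if the positive-rate gap between subgroups exceeds max_gap."""
    rates = {g: float(y_pred[group == g].mean()) for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return {"positive_rates": rates, "gap": gap, "flagged": gap > max_gap}

# Reusing the toy arrays from the previous snippet:
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
print(audit_fairness(y_pred, group))
```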
Tackling Bias in Machine Learning Models
To tackle bias in machine learning, you need to take a two-pronged approach centered on people and technology. People must be aware of the historical contexts in which AI is prone to unfair bias, as in some of the examples discussed above. You should also outline situations where fully automated decision-making is acceptable and situations where human review is needed alongside algorithmic recommendations.
Next, you need to put the right technical tools in place to mitigate bias. Empowered Systems Connected Risk employs techniques, practices, and behaviors that identify, measure, and mitigate model risks – including machine learning bias. With Empowered Systems Connected Risk, you can perform automated model testing, monitoring, and validation, allowing you to run more frequent and comprehensive tests and consciously mitigate bias.
Conclusion
Bias in machine learning is a critical issue that can have far-reaching implications if not addressed properly. By understanding the different types of biases and implementing strategies to ensure fairness, we can create more reliable and equitable ML models. Empowered Systems Connected Risk provides the necessary tools to help organizations manage and mitigate bias in their machine learning models, promoting fairness and accuracy in AI-driven decision-making.