Gender Inequality in AI

Aniket Purkayastha
Nov 25, 2021

Gender inequality and bias have prevailed in society for centuries. Whether at the workplace, in fundamental rights, or in access to financial aid, women have fought at every step against discriminatory stereotypes and injustice. As more and more women join the workforce, we are faced with wage disparity, under-representation of women, and many other deeply rooted discriminatory ideologies. Gender hierarchy has had a long-lasting influence on our society, yet we humans are the ones who created these inequalities and hierarchies. With the advancement of technology, Artificial Intelligence has become a one-stop solution for almost every kind of problem and is poised to take over a large part of our lives, raising hopes that it can address serious issues like gender bias. The irony is that even the decisions made by artificial intelligence are not perfect; some of them are influenced by gender bias as well. It therefore becomes essential to understand what gender inequality and bias in Artificial Intelligence actually are.

How does Gender Inequality influence AI Decisions?

Artificial Intelligence is still at an early stage of development, with millions of scientists and researchers working hard to improve its decision-making abilities and make it as efficient as possible. Meanwhile, Artificial Intelligence already has a considerable impact on human lives in a variety of ways, and it is poised to become essential in nearly every aspect of life, with a far greater impact. It therefore becomes more and more crucial to address the limitations of Artificial Intelligence now; left unaddressed, they could prove disastrous in the coming days as AI becomes more active.

Before jumping directly to the reasons why gender inequality influences AI-based decisions, we need to understand how AI works. Most Artificial Intelligence is built on sets of rules popularly known as algorithms, developed by coders and developers. The first thing needed to develop such a system is a training dataset. These datasets contain thousands of instances, and each instance consists of a set of features along with a label/output column. Take, for example, a resume-shortlisting training dataset with thousands of candidate records. Here, the features can be age, sex, GPA, education, skills, hobbies, and so on, while the label/output field records whether the candidate was selected or rejected.

The next step is feature extraction, where the developers filter out the most suitable features for training the AI system. These features are fed into the system, which trains itself against the output values. Inside the system sits the key machine learning algorithm, which can take the form of various models: a neural network, logistic regression, a random forest classifier, a decision tree, a support vector machine, and many more. Each model has its own rules and techniques for training the system. In reality, the developers may know in theory what happens inside these models, but often they don't have a clue why their system makes the choices it makes; they just know that, in the end, the system produces an output with a fairly high degree of confidence. The basic idea is that, from the original training data, the AI system figures out what output the humans desire. This is the training stage of an AI system.

In the next step, a completely new set of data that has not been vetted or pre-processed by humans is fed into the system. The system applies what it learned from the original vetted training dataset to this new data, the hope being that it will be far faster than humans will ever be at predicting accurate and precise outputs. But in all of this, one question arises: when and where does the gender bias sneak in?
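To make this pipeline concrete, here is a minimal sketch in Python using scikit-learn. The dataset, column names, and values are all invented for illustration; only the shape of the workflow (features, a human-made label, a training stage, then prediction on new data) mirrors the description above.

```python
# Minimal sketch of the pipeline described above, using scikit-learn.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy "resume shortlisting" data: every column and value is invented.
data = pd.DataFrame({
    "age":       [24, 31, 27, 45, 29, 38],
    "gpa":       [3.6, 3.1, 3.9, 2.8, 3.4, 3.0],
    "years_exp": [1, 6, 2, 15, 4, 10],
    "selected":  [1, 0, 1, 0, 1, 0],  # label decided by past human reviewers
})

# "Feature extraction": pick the input columns and the label column.
X = data.drop(columns=["selected"])
y = data["selected"]

# Hold out some rows to stand in for new, unvetted data.
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.33, random_state=0)

# The "key machine learning algorithm" could be any of the models named
# above; a random forest classifier is used here.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)           # training stage

# Inference stage: apply what was learned to the new data.
print(model.predict(X_new))           # predicted selected/rejected
print(model.predict_proba(X_new))     # the model's confidence in each output
```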

Remember how an AI system is trained on a large, pre-processed dataset during its training stage. The fact is, that dataset was previously vetted and pre-processed by none other than humans, and those humans made the very decisions that were fed in as training examples. The irony is that the problem artificial intelligence was supposed to solve became the foundation on which these AI systems were built. Even though gender was never explicitly fed into these systems, they learned to prefer male candidates just as the humans before them did. After all, the whole point of these systems is to make them act like the very humans they are designed to replace; and if those humans had biases against women, then so will the algorithms that mimic them.
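A small synthetic experiment can show this mechanism. In the sketch below (all data randomly generated, all feature names hypothetical), the gender column is deliberately withheld from the model, yet a correlated proxy feature is enough for the model to reconstruct the human bias baked into the labels.

```python
# Toy demonstration of how bias survives even when the gender column is
# dropped: a proxy feature correlated with gender still lets the model
# reproduce biased human labels. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)                       # 0 = male, 1 = female (never fed to model)
proxy = gender ^ (rng.random(n) < 0.1).astype(int)   # noisy stand-in, e.g. a resume keyword
skill = rng.random(n)                                # a genuinely job-relevant feature

# Biased historical labels: skilled candidates were selected, but women
# were passed over half the time regardless of skill.
selected = ((skill > 0.5) & ~((gender == 1) & (rng.random(n) < 0.5))).astype(int)

# Train WITHOUT the gender column: only skill and the proxy go in.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, selected)

print("weight on skill:", model.coef_[0][0])   # positive, as expected
print("weight on proxy:", model.coef_[0][1])   # negative: the bias was learned anyway
```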

Case Study of Amazon's Gender-Biased AI Recruiting Tool

Today, Amazon is one of the most successful tech giants in the world, known for its extensive use of Artificial Intelligence for functions such as improving the customer experience and recruiting employees. Back in 2014, a team of developers started experimenting with an Artificial Intelligence-based recruiting tool. The system would crawl the internet to find candidates suitable for the company and rate their CVs on a scale of 1 to 5. But it did not take long for things to go wrong: by 2015, Amazon's developers realized that the very system they were using to screen potential candidates was not working in a gender-neutral way. The tool had started downgrading women's resumes, and more and more male candidates were being preferred. The project was eventually scrapped, and Amazon made headlines in 2018 when Reuters reported the story. But how did this AI recruiting tool start making gender-biased decisions?

After a thorough investigation, the developers found a solid reason. The system had been trained on the CVs submitted to Amazon over the previous 10 years, and it turned out that most of the candidates who had applied over that period were male, a direct result of male dominance in the tech industry. Since the very foundation on which the tool was built was itself skewed, the machine learning model learned on its own to prefer male candidates over female candidates; according to the Reuters report, it even penalized resumes containing the word "women's", as in "women's chess club captain". The biased data had opened the door for gender bias to influence the decision-making.
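The sketch below, built on invented mini-resumes (it is in no way Amazon's actual data or code), shows how easily this failure mode arises: train a simple text classifier on biased historical decisions, and it assigns a negative weight to the word "women's" entirely on its own.

```python
# Toy illustration (invented mini-resumes, NOT Amazon's real data or code)
# of the failure mode described above: when historical labels favor men,
# a text model can learn to penalize the word "women's" by itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, python developer",
    "captain of women's chess club, python developer",
    "java developer, hackathon winner",
    "women's coding society lead, java developer",
    "python developer, robotics team",
    "women's robotics team lead, python developer",
]
# Biased historical outcomes: otherwise similar resumes that mention
# "women's" were rejected by past human screeners.
selected = [1, 0, 1, 0, 1, 0]

vectorizer = CountVectorizer()               # bag-of-words features
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, selected)

# The learned weight for the token "women" comes out strongly negative,
# reproducing the human bias without gender ever being an explicit input.
idx = vectorizer.vocabulary_["women"]
print("weight for 'women':", model.coef_[0][idx])
```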

Since then, other companies have been more cautious and prudent when building their AI-based recruiting and resume-analysis tools.

How can we overcome Gender Inequality in AI?

Before jumping to solutions, we need to remember that Artificial Intelligence is still at an early stage of development; with advancing technology, there is plenty more to come and plenty to experiment with. Therefore, no proposed solution can guarantee complete success in overcoming gender bias in AI. Still, a few essential practices that developers should keep in mind before building an AI tool are:

  • The proportions of male and female data in the training dataset should be roughly equal, and the dataset should contain a considerable amount of diversity overall.
  • The developing team must thoroughly evaluate accuracy separately for the different categories. If any category is favored or disfavored disproportionately, the gap must be addressed with the utmost caution (see the sketch after this list).
  • If a group is especially vulnerable to discrimination, the developers should include a large amount of data from that group in the training dataset. For example, in the case of gender bias, including a considerable amount of training data for women can help ensure a less biased result.
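As a concrete illustration of the second practice, here is a minimal per-group evaluation sketch; the data, group labels, and the alert threshold are all made up for the example.

```python
# Minimal sketch of the per-category evaluation suggested in the second
# point above: compare accuracy across groups and flag large gaps.
# All data, group labels, and the threshold below are invented.
import numpy as np
from sklearn.metrics import accuracy_score

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} so gaps between categories are visible."""
    return {
        g: accuracy_score(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }

# Toy predictions from some already-trained model.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["F", "F", "F", "M", "M", "M", "F", "M"])

scores = accuracy_by_group(y_true, y_pred, groups)
print(scores)   # here: {'F': 0.5, 'M': 1.0}
if max(scores.values()) - min(scores.values()) > 0.1:
    print("Warning: large accuracy gap between groups; investigate before shipping.")
```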

Gender inequality in AI is a vast and deep-rooted problem, and following the practices above alone will not completely eliminate it; plenty of questions remain unanswered. But by following these rules, we can certainly move toward AI models that make the fairest decisions possible.
