This research study on Bias in Artificial Intelligence (AI) was carried out by PG in Risk Management students from the Jan '22-23 and July '21-22 batches. Before opting for a career in risk management, Rahul and Shobhit graduated with B.Com degrees, Tirtharaj with a BBA degree, and Sri Sruthi with a B.Tech degree. None of them comes from a risk background, but all of them share one goal: to build a successful career in risk management in India. A professional course after graduation not only enhances your career but also places you higher in the hierarchy. Currently, Rahul and Tirtharaj are still pursuing the risk management course at GRMI, while Shobhit and Sri Sruthi are working at EY (Ernst & Young).
Here is their research study:
Bias in AI (Artificial Intelligence)
By
Rahul Gupta & Tirtharaj Sanyal, PGDRM Jan’22-23
Shobhit Khanna & Sri Sruthi Konda, PGDRM July’21-22
How can a machine be biased?
- Bias refers to prejudice in favor of or against one thing, person, or group compared with another, usually in a way considered to be unfair.
- Since a machine cannot think on its own, human bias can creep into it through data collection, programming, selection, and interaction.
- Because the world is filled with bias, any data we collect from it contains biases; when we train our models on that data, those biases are reflected in the machines.
- AI cannot be relied upon to make moral decisions, since such decisions carry the baggage of ethical dimensions.
‘We’ are biased!
Why did we not say "red watermelon" when we saw this original image?
When we see an image like this, the tendency is to think of it simply as "watermelon" rather than "red watermelon", and that is because of our own bias, which depends on geography.
AI: The Data Pipeline
AI bias takes place when incorrect assumptions are made about the dataset or the model during the machine learning process, which subsequently leads to unfair results.
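To make this concrete, here is a minimal, hypothetical sketch (synthetic data and scikit-learn; none of the numbers come from the study) showing how under-sampling one group at the data-collection stage quietly degrades the model's accuracy for that group:

```python
# Toy illustration: sampling bias at data collection degrades accuracy for the
# under-represented group. All data is synthetic; the numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-feature data whose decision boundary depends on the group's shift."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training sample; group B is barely collected.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)   # under-sampled group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out data: group B fares much worse.
for name, shift in [("A", 0.0), ("B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(f"group {name} accuracy: {model.score(X_test, y_test):.2f}")
```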
AI Bias Cycle
The Unwanted AI Biases
- The biases in data sourced from the web: Cultural, Historical, Aggregation, and Temporal.
- Cultural bias, in addition to language and geography, covers these aspects of humanity: Gender, Race, Economics, Age, Tribe, Education, and Religion.
- The sampling bias category indicates the need to check for diversity in data sources and appropriate sampling: Sources, Sampling Method, and Size.
- The types of selection bias: Measurement and Omitted Variable.
- The five key biases: Algorithmic, Popularity, Evaluation, Emergent, and Ranking.
- User-interaction biases are just that: Social, Presentation, Observer, Linking, Behavioral, Cause-Effect, and Production.
Gender & Occupational Bias
When a group of US and European researchers fed pictures of 20 members of Congress to Google Cloud Vision (GCV), Google Cloud's image recognition service, the AI tagged the women with appearance-related labels such as "smile", "skin", and "beauty", while labeling the men "businessperson" and "official". This is evidence of gender and occupational bias.
Racial Bias
- In an experiment on Twitter images, Google Vision labeled an image of a dark-skinned individual holding a thermometer as a "gun", while a similar image with a light-skinned individual was labeled a "monocular".
- Google Vision AI thus produced racially discriminatory results.
The Judgement Algorithm – COMPAS
- COMPAS, an acronym for Correctional Offender Management Profiling for Alternative Sanctions, is an AI tool used to predict recidivism risk: the risk that a criminal defendant will re-offend.
- These charts show that scores for white defendants were skewed toward the lowest risk score of 1.
- Yet the white defendants who were labeled low risk went on to commit more crimes than that label suggested; the sketch below shows how such error-rate gaps can be measured.
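Skews like this are typically quantified by comparing error rates across groups. Below is a hypothetical, hand-rolled audit (the toy data is invented, not the actual COMPAS dataset) showing how per-group false-positive and false-negative rates expose the asymmetry:

```python
# Hypothetical audit of a risk score for error-rate disparity. The toy data
# below is invented for illustration; it is NOT the actual COMPAS dataset.
import pandas as pd

df = pd.DataFrame({
    "group":      ["white"] * 4 + ["black"] * 4,
    "high_risk":  [0, 0, 0, 1, 1, 1, 0, 1],   # the model's label
    "reoffended": [1, 1, 0, 1, 0, 0, 0, 1],   # the observed outcome
})

def error_rates(g):
    # False positive rate: labeled high risk among those who did not re-offend.
    fpr = ((g["high_risk"] == 1) & (g["reoffended"] == 0)).sum() / max((g["reoffended"] == 0).sum(), 1)
    # False negative rate: labeled low risk among those who did re-offend.
    fnr = ((g["high_risk"] == 0) & (g["reoffended"] == 1)).sum() / max((g["reoffended"] == 1).sum(), 1)
    return pd.Series({"false_positive_rate": fpr, "false_negative_rate": fnr})

print(df.groupby("group")[["high_risk", "reoffended"]].apply(error_rates))
```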
Algorithmic Biases
- Algorithmic bias is a lack of fairness that emerges in the output of a computer system.
- This lack of fairness comes in various forms but can be summarized as the discrimination of one group based on a specific categorical distinction.
- The concept of algorithmic bias captures the observation that neural networks, and AI systems more broadly, are susceptible to significant bias, and that these biases can lead to very real and detrimental societal consequences.
- Indeed, today more than ever, we are already seeing this manifest in society, in everything from facial recognition to medical decision-making to voice recognition.
AI Bias in Healthcare
- An algorithm for neurological diseases that records the way a person speaks and analyzes that data to detect Alzheimer's disease achieved over 90% accuracy.
- However, when non-English speakers took the test, it identified their pauses and mispronunciations as indicators of the disease.
A recent study published in JAMA (the Journal of the American Medical Association) found that most of the data used to train such AI algorithms came from just three states: California, New York, and Massachusetts.
Amazon Recruiting Algorithm
- In 2018, Amazon scrapped its recruiting algorithm after finding that it had learned word patterns from the previous 10 years of resumes, the majority of which came from men.
- As a result, the algorithm favored male candidates over female candidates, penalizing women's resumes irrespective of skill set.
Approaches To Minimize AI Bias
Pre-processing approach:
- This method processes the data in advance to preserve as much fairness and accuracy as possible while minimizing any link between the outcomes and protected attributes.
- E.g., treating a sensitive parameter such as age with care when deciding on medical treatment. A reweighing sketch follows this list.
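A standard concrete instance of this pre-processing idea is reweighing: assigning each training example a weight so that the protected attribute and the label become statistically independent. The sketch below computes the classic weights by hand with pandas; the column names are illustrative, and IBM's AI Fairness 360 (cited later in this study) ships a ready-made Reweighing implementation of the same idea.

```python
# Minimal reweighing sketch: weight each (group, label) cell so the protected
# attribute and the outcome look statistically independent during training.
# Column names are illustrative only.
import pandas as pd

df = pd.DataFrame({
    "age_group": ["young", "young", "young", "old", "old", "old", "old", "young"],
    "treated":   [1, 0, 0, 1, 1, 1, 0, 1],
})

n = len(df)
p_group = df["age_group"].value_counts(normalize=True)      # P(A = a)
p_label = df["treated"].value_counts(normalize=True)        # P(Y = y)
p_joint = df.groupby(["age_group", "treated"]).size() / n   # P(A = a, Y = y)

# Classic reweighing formula: w(a, y) = P(A=a) * P(Y=y) / P(A=a, Y=y).
weights = df.apply(
    lambda r: p_group[r["age_group"]] * p_label[r["treated"]]
              / p_joint[(r["age_group"], r["treated"])],
    axis=1,
)
print(df.assign(weight=weights))
# Most learners accept these weights via a sample_weight argument at fit time.
```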
In-processing approach:
- Reweighting features, feeding the model accurate and balanced amounts of data, and limiting an adversary's ability to recover protected attributes from the model's own predictions (adversarial de-biasing) help build a fair classifier.
- Using innovative training techniques such as decoupled classifiers to minimize bias; a sketch follows below.
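"Decoupled classifiers" in the bullet above simply means fitting a separate model per protected group so that the majority group's patterns cannot dominate the minority group's decision boundary. A minimal sketch with scikit-learn, assuming synthetic data and a binary protected attribute:

```python
# Decoupled classifiers: fit one model per protected group so the majority
# group's patterns cannot dominate the minority group's decision boundary.
# Synthetic data; the binary protected attribute is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
group = rng.integers(0, 2, size=500)                 # binary protected attribute
y = (X[:, 0] + group * X[:, 1] > 0).astype(int)      # group-dependent pattern

# One classifier per group value.
models = {g: LogisticRegression().fit(X[group == g], y[group == g])
          for g in np.unique(group)}

def predict(X_new, group_new):
    """Route each row to the model trained on its own group."""
    out = np.empty(len(X_new), dtype=int)
    for g, m in models.items():
        mask = group_new == g
        if mask.any():
            out[mask] = m.predict(X_new[mask])
    return out

print(predict(X[:10], group[:10]))
```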
Post-processing approach:
- This method focuses on post-processing techniques that transform the system's skewed predictions until they meet satisfactory fairness levels.
- E.g., correcting the skewed outputs of facial recognition tools across demographic groups helps minimize bias. A per-group threshold sketch follows this list.
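A simple hypothetical form of post-processing is choosing a separate decision threshold per group so that selection rates line up after training (Fairlearn's ThresholdOptimizer automates a more principled version of this). A hand-rolled sketch on synthetic scores:

```python
# Post-processing sketch: choose a per-group decision threshold so that the
# positive (selection) rate is roughly equal across groups. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
scores = rng.uniform(size=200)          # scores from an already-trained model
group = rng.integers(0, 2, size=200)    # protected attribute per individual
target_rate = 0.3                       # desired selection rate for all groups

# The (1 - target_rate) quantile of each group's scores selects ~target_rate.
thresholds = {g: np.quantile(scores[group == g], 1 - target_rate)
              for g in np.unique(group)}

decisions = np.array([s >= thresholds[g] for s, g in zip(scores, group)])
for g in np.unique(group):
    print(f"group {g} selection rate: {decisions[group == g].mean():.2f}")
```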
De-biasing Tools
Quantitative tools: There are several AI fairness tools meant to help engineers and data scientists examine, report, and mitigate discrimination and bias in ML models; a usage sketch follows this list.
- IBM’s AI Fairness 360 Toolkit
- Google’s What-If Tool
- Microsoft's Fairlearn
- Facebook's Fairness Flow
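As an illustration of what these quantitative tools report, here is a small example with Fairlearn's MetricFrame, which breaks metrics down by a sensitive feature; the arrays are made up for demonstration:

```python
# Illustrative use of Fairlearn's MetricFrame to break metrics down by group.
# The arrays below are made up for demonstration.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 1]
sex    = ["F", "F", "F", "M", "M", "M", "M", "F"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)       # metric values per group
print(mf.difference())   # largest gap between groups, per metric
```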
Qualitative tools: These enable teams to envision the AI system and its role in society, explore potential fairness-related harms and trade-offs, outline how bias could occur, and prepare plans to mitigate biases.
- Co-designed AI fairness checklist (2020)
- Fairness Analytic (2019)
How It Works: From a Black Box Model to a White Box Model
Need For AI Fairness
- A PwC survey found that 85% of CEOs believe AI will significantly change the way they do business in the next five years.
- The same research estimates that AI could contribute $15.7 trillion to the global economy by 2030.
- 76% of CEOs are most concerned about the potential for bias and lack of transparency when it comes to AI adoption.
How To Adopt De-Biasing?
- Establish clear-cut processes to test and mitigate bias in every part of the development cycle.
- Be mindful of the contexts in which AI can absorb bias or scale it up. This includes sentiment analysis, content moderation, and intent recognition.
- Cross-functional teams covering legal, ethical, and social perspectives should work together to mitigate AI biases.
- Include more women and people of color to build diverse data science teams.
- Combine multiple data sources to build data diversity and test on real-time data.
- Invest more in AI diversification and bias research.
Get the full study here: Bias in AI
Disclaimer
This report has been produced by students of Global Risk Management Institute for their own research, classroom discussions, and general information purposes only. While care has been taken in gathering the data and preparing the report, neither the students nor GRMI makes any representations or warranties as to its accuracy or completeness, and each expressly excludes, to the maximum extent permitted by law, all warranties that might otherwise be implied. References to the information collected have been given where necessary.
Neither GRMI nor its students accepts any responsibility or liability for any loss or damage of any nature occasioned to any person as a result of acting or refraining from acting on, or in reliance on, any statement, fact, figure, or expression of opinion or belief contained in this report. This report does not constitute advice of any kind.