DATA DISCRIMINATION


The *isms in the Algorithms

The promise of a more equitable society always seems just around the corner – right? We are led to believe that leadership opportunities for women will soon be abundant, Black people won’t suffer from discriminatory police practices and “Mohammed” won’t be rejected from the interview process before it even begins. 

However, the reality is starkly different. Technological advancements seem set to perpetuate centuries-old discrimination, embedding exclusionary practices into the digital fabric of our world. Bias is being entrenched within everyday algorithms. What can be done to disrupt these systems? How do we identify and successfully address their perpetuation? How do we forge a transparent path toward the elusive goals of equality, inclusivity and justice?

Historical Context

Discrimination and bias are not new phenomena; they have been woven into the fabric of society for many centuries. Historically, systems of privilege have consistently favoured certain social groups, creating entrenched inequalities. These systems have manifested through various forms, such as slavery, segregation and institutionalised racism, sexism and classism. Social hierarchies were maintained by policies and practices that marginalised and disenfranchised minorities, women and other vulnerable groups and populations.

Laws and societal norms were often designed to maintain these inequalities. For example, Jim Crow laws in the United States enforced racial segregation, while women worldwide were denied the right to vote or work in certain professions until well into the 20th century. These historical injustices have left deep scars and established patterns of bias that continue to affect contemporary society. Understanding this context is crucial for addressing how these biases have transitioned into the digital age.

Ethical Imperatives

Addressing bias and discrimination must be considered an ethical imperative. At its core, discrimination is a violation of fundamental human rights. It undermines the principles of equality, inclusivity and justice that are supposed to be the foundation of modern democratic societies. An ethical stance demands that we confront and rectify these injustices, ensuring that we strive toward a society where every person is treated equitably, with dignity and fairness.

Ethical frameworks exist, and they provide a moral basis for addressing bias in all its forms; yet the divide between those advocating these practices and the technology companies implementing discriminatory systems remains significant.

The Digital Age and New Risks

Our transition towards a more digital society has already shown the risks of creating systems that perpetuate all forms of discrimination. The biases embedded in historical data are now being encoded into algorithms that influence crucial aspects of our lives. These algorithms process data without context, often magnifying existing biases and creating new forms of discrimination. The influence of these technologies is vast, and the potential for harm is significant.

Rep. Alexandria Ocasio-Cortez, speaking with Ta-Nehisi Coates at the annual MLK Now event, stated that “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”(1)
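
To make that concrete, here is a minimal sketch, in Python, of how "automating the bias" can happen. Everything in it is invented for illustration: the synthetic data generator, the "proxy" feature (think postcode or school name) and the toy training step are assumptions, not any real hiring system.

    import random

    random.seed(0)

    def make_history(n=10_000):
        """Generate synthetic past hiring decisions with an embedded bias:
        historical reviewers (by assumption here) hired group B candidates
        only above a higher skill threshold than group A candidates."""
        rows = []
        for _ in range(n):
            group = random.choice(["A", "B"])
            skill = random.random()
            # A proxy feature correlated with group membership
            # (e.g. postcode or school name), not the group label itself.
            proxy = 0.3 * skill + (0.6 if group == "A" else 0.2) + 0.1 * random.random()
            threshold = 0.5 if group == "A" else 0.7  # the biased historical rule
            rows.append({"group": group, "proxy": proxy, "hired": skill > threshold})
        return rows

    history = make_history()

    def best_cutoff(rows):
        """'Train' a one-feature model: pick the proxy cutoff that best
        matches the historical decisions. The group label is never used."""
        def accuracy(c):
            return sum((r["proxy"] > c) == r["hired"] for r in rows) / len(rows)
        return max((i / 100 for i in range(101)), key=accuracy)

    cutoff = best_cutoff(history)

    # Audit the learned rule: selection rate per group.
    for g in ("A", "B"):
        rows = [r for r in history if r["group"] == g]
        rate = sum(r["proxy"] > cutoff for r in rows) / len(rows)
        print(f"group {g}: selected {rate:.1%}")

Run on this synthetic history, the learned cutoff selects roughly half of group A and no one from group B, even though the model never sees the group label: the historical bias has simply been automated.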

Examples of Existing Algorithmic Discrimination

  • Facial Recognition Technology: Despite its growing prevalence and potential utility, this technology embeds and perpetuates significant discriminatory biases. High-profile cases, such as the wrongful arrests of Robert Williams and Porcha Woodruff in Detroit, highlight the severe consequences of misidentification for Black individuals. Research by MIT’s Joy Buolamwini has revealed that facial recognition systems from major tech companies like IBM, Microsoft and Amazon exhibit markedly higher error rates for darker-skinned individuals, particularly women, compared to their lighter-skinned counterparts (a sketch of this kind of disaggregated audit appears after this list). These biases have prompted cities like San Francisco to ban the use of facial recognition by city agencies, reflecting widespread concerns about civil liberties and privacy while underscoring the broader issue of racial and gender bias in algorithmic systems.
  • Predictive Analytics in Policing and Sentencing: Algorithmic systems used in predictive policing and sentencing worldwide have also come under scrutiny for perpetuating discrimination and bias. In the United Kingdom, the Durham Constabulary’s Harm Assessment Risk Tool (HART) has been found to reflect biases inherent in historical police data, exacerbating disparities in policing outcomes. In the Netherlands, the SyRI system faced legal challenges for unfairly targeting low-income and immigrant communities for welfare fraud investigations. China’s Social Credit System raises concerns over its opaque algorithms, which penalise ethnic minorities and dissenting voices. Similarly, in Australia and Canada, predictive policing tools have been criticised for excessively targeting Indigenous and minority communities, further highlighting entrenched patterns of discrimination. In the United States, the COMPAS algorithm, employed to assess recidivism risk, has been criticised for disproportionately labelling African-American defendants as higher risk than their white counterparts, resulting in harsher sentences and bail conditions.
  • Social Media and Advertising: Social media platforms like Facebook have been criticised for enabling discriminatory practices in advertising. In one notable instance, Facebook’s ad targeting capabilities allowed advertisers to exclude specific racial groups from seeing housing, employment and credit ads, in violation of anti-discrimination laws. Despite efforts to address these issues, ongoing reports have indicated persistent failures to prevent discriminatory ad targeting, which consequently reinforces systemic inequalities.
  • Credit and Loan Approvals: Algorithmic discrimination in credit and loan approvals is a global issue. In the UK, the Financial Conduct Authority found that credit scoring algorithms could unintentionally discriminate against minority groups by perpetuating historical biases. In Australia, similar concerns were raised about big data and machine-learning models that disadvantaged Indigenous Australians. In India, microfinance algorithms have shown gender bias, often favouring male applicants over female ones. Across Europe, credit scoring algorithms used by banks have been reported to exhibit ethnic biases, replicating discriminatory lending practices found in historical data. In Brazil, too, automated credit scoring systems have been found to unfairly reject loan applications from Black and mixed-race individuals.
  • Hiring Practices: Algorithmic discrimination in hiring practices is a widespread issue globally. In the UK, algorithms have been shown to discriminate against candidates with ethnic-sounding names, while in Australia, AI-driven platforms often favour male candidates for technical and leadership roles because of gender biases in the training data. In Germany, automated hiring tools tend to prefer younger, male candidates, reflecting age and gender prejudices. Similarly, in Japan, recruitment systems have been biased against foreign applicants, favouring domestic candidates. Even global tech companies like Amazon have faced scrutiny, after a trial recruitment algorithm was found to discriminate against female candidates.
  • Resume Screening and Disabilities: After University of Washington graduate student Kate Glazko noticed recruiters using AI tools like ChatGPT to summarise resumes and rank candidates, a UW study tested the practice and found that ChatGPT consistently ranked resumes with disability-related honours and credentials lower than identical resumes without them. Customising the tool with instructions to avoid ableism, however, reduced this bias for most of the disabilities tested (a sketch of the matched-pair audit technique behind such findings also appears after this list).
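
Two short sketches may help make the auditing techniques behind these findings concrete. The first, referenced in the facial recognition item above, is disaggregated evaluation: instead of reporting one overall accuracy number, the error rate is computed separately for each demographic subgroup, which is what exposes intersectional gaps such as higher error rates for darker-skinned women. The field names and the handful of records below are invented for illustration; a real audit would use a labelled benchmark set.

    from collections import defaultdict

    def error_rates_by_group(records):
        """Compute the misclassification rate per (skin_tone, gender) subgroup."""
        errors = defaultdict(int)
        totals = defaultdict(int)
        for r in records:
            key = (r["skin_tone"], r["gender"])
            totals[key] += 1
            errors[key] += r["predicted"] != r["actual"]
        return {k: errors[k] / totals[k] for k in totals}

    # Illustrative records only; real audits use labelled benchmarks.
    records = [
        {"skin_tone": "lighter", "gender": "male",   "predicted": "male",   "actual": "male"},
        {"skin_tone": "lighter", "gender": "female", "predicted": "female", "actual": "female"},
        {"skin_tone": "darker",  "gender": "female", "predicted": "male",   "actual": "female"},
        {"skin_tone": "darker",  "gender": "female", "predicted": "female", "actual": "female"},
    ]

    for group, rate in sorted(error_rates_by_group(records).items()):
        print(group, f"error rate: {rate:.0%}")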
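
The second sketch is the matched-pair audit behind the UW resume study: submit inputs identical except for one protected signal and compare outcomes. The scoring function here is a hypothetical stand-in for whatever ranker is being audited (an LLM, an applicant-tracking system), and the award name and sample resume are invented.

    from typing import Callable

    def paired_audit(resumes: list[str],
                     add_signal: Callable[[str], str],
                     score: Callable[[str], float]) -> float:
        """Fraction of pairs in which the signal-added variant scores lower."""
        lower = sum(score(add_signal(r)) < score(r) for r in resumes)
        return lower / len(resumes)

    # Everything below is illustrative/hypothetical.
    def add_disability_honour(resume: str) -> str:
        return resume + "\nAwards: Disability Leadership Award"

    def toy_score(resume: str) -> float:
        # Stand-in for a real ranker; this one is deliberately biased.
        return 1.0 - (0.3 if "Disability" in resume else 0.0)

    resumes = ["Jane Doe\nBSc Computer Science\n5 years Python experience"]
    print(f"penalised in {paired_audit(resumes, add_disability_honour, toy_score):.0%} of pairs")

An unbiased ranker should push that fraction towards zero; a consistently higher rate across many pairs is the signature the UW researchers observed.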

The pervasive influence of algorithms necessitates a concerted effort to disrupt and reform these systems. Addressing the ethical dimensions and societal impacts of data discrimination is crucial. Emphasising the intersectionality of inclusivity and equity is essential to ensure the health of a democracy and prevent society from blindly continuing to reward the privileged few.

(1) Yes, artificial intelligence can be racist

REFERENCES CONSULTED AND USEFUL SOURCES:

Rolling Stone: These Women Warned Us About AI
Data Authenticity, Consent, and Provenance for AI Are All Broken: What Will It Take to Fix Them?
Cornell Chronicle: Social scientists take on data-driven discrimination
LSE: Data-driven discrimination: a new challenge for civil society
Data Discrimination: The Dark Side of Big Data
VOX: Algorithms and bias, explained
University of Washington: ChatGPT is biased against resumes with credentials that imply a disability
