ARTIFICIAL INTELLIGENCE
🌎 What If "Fair" AI Isn't About Being Fair But About Who Gets to Define What Fair Means?


AI systems don't decide what is fair on their own; people give them that power.

Read that sentence again.

Algorithms gain influence from the people and departments that choose to implement them, authorize their use, and position them as "objective" or "fair."

I have been reading through a groundbreaking four-year ethnographic study titled "Is There Fairness in AI?", published in the Journal of Management Studies. It reveals how AI reshapes what organizations consider "fair" rather than simply enforcing existing fairness principles.

Here's what happened: Imagine working at the same company for five years. Tax season after tax season, you exceed every performance target. Your manager watches you handle complex customer cases with precision. When the promotion opens up, he's certain you're the perfect fit.

"She's exactly what we need," he tells HR. "Five years of excellent performance. She speaks three languages and has the judgment for complex situations."

Two weeks later, the AI interviewer rejects you. No explanation. No human feedback. Just an algorithmic decision.

Not because you lacked qualifications. Because the machine couldn't understand your sign language interpreter. What HR called "communication concerns" was actually the algorithm failing to process how a Deaf Indigenous woman communicates.

Sarah Chen (pseudonym), a five-year Intuit employee with stellar reviews, was blocked from promotion by HireVue's AI system.

A machine decided her career wasn't worth a second look.

This research, by Elmira van den Broek, Anastasia V. Sergeeva, and Marleen Huysman of the Stockholm School of Economics and Vrije Universiteit Amsterdam, helps explain what went wrong in Sarah's case.

Key Findings

🔄 Fairness Gets Crowded Out: AI doesn't simply improve or degrade fairness; it "crowds out expert practices of performing fairness." HR professionals who previously used contextual judgment were replaced by algorithmic consistency measures.

⚖️ Symbiotic Relationship Forms: The researchers discovered a growing symbiosis between HR's fairness mandate and AI's "scientific" procedures. Each legitimized and protected the other, making alternative fairness definitions impossible.

🏢 Material Power Over Values: Unlike traditional fairness tools (policies, training), AI technologies can "decide what is fair with limited possibilities for human experts to intervene or override." The algorithm becomes the final arbiter; the sketch after this list shows what a missing override hook looks like in practice.

🔒 Sociomaterial Lock-In: Fairness emerges through how people "define, embed, and perform values with algorithms." Once embedded, the AI's version of fairness becomes organizationally unquestionable.

🎭 Scientific Legitimacy Theater: Organizations adopted AI to appear more "consistent" and "unbiased," but this performance of objectivity actually institutionalized one group's fairness preferences over others.
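
To make the third finding concrete: whether an AI is a "final arbiter" or something a human expert can override is a structural property of the decision pipeline. Here is a minimal Python sketch of that difference. Every name, threshold, and field in it is invented for illustration; nothing is drawn from HireVue or from the study itself.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    ai_score: float          # 0.0-1.0 score from a hypothetical screening model
    model_confidence: float  # how sure the model is about its own assessment

def decide_without_override(c: Candidate, threshold: float = 0.6) -> str:
    """'Final arbiter' pattern: the score alone decides, with no appeal."""
    return "advance" if c.ai_score >= threshold else "reject"

def decide_with_override(c: Candidate, threshold: float = 0.6,
                         min_confidence: float = 0.8) -> str:
    """Human-in-the-loop pattern: cases the model can't confidently
    assess are routed to a human reviewer, not auto-rejected."""
    if c.model_confidence < min_confidence:
        return "human_review"
    return "advance" if c.ai_score >= threshold else "reject"

# A candidate the model scores poorly because it couldn't process the
# input at all (e.g. an interview conducted through an interpreter):
candidate = Candidate("applicant", ai_score=0.2, model_confidence=0.3)

print(decide_without_override(candidate))  # -> reject
print(decide_with_override(candidate))     # -> human_review
```

In the first pattern a low score is a final rejection; in the second, a case the model can't confidently parse, like an interview conducted through a sign language interpreter, goes to a human expert instead of the reject pile.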

Why It Matters

For HR Leaders: Your AI hiring choice isn't a neutral technical decision; it permanently encodes one definition of fairness while eliminating others. The four-year study shows this becomes effectively irreversible once implemented.

For Hiring Managers: You're experiencing the "crowding out" of expert judgment. The research shows this isn't accidental; it's how these AI systems fundamentally operate, replacing contextual decision-making with algorithmic consistency.

For Job Seekers: Understanding that AI "fairness" reflects organizational power struggles, not universal principles, helps explain systematic rejections that seem obviously unfair to humans but make sense to algorithms.

For Policymakers: The study reveals that regulating AI bias isn't enough; we need democratic processes for deciding whose definition of fairness gets embedded in algorithmic systems that "decide what is fair."

For Everyone: This isn't just about hiring. Every AI system that claims to make "fair" decisions, in lending, healthcare, and criminal justice, faces the same challenge. We need transparency about whose values are being encoded; the toy example below shows how two standard fairness definitions can disagree on the very same outcomes.
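
To see why "whose definition" matters, here is a toy Python example, using invented numbers, comparing two standard formal definitions of fairness: demographic parity (equal hire rates across groups) and equal opportunity (equal hire rates among qualified applicants). The same outcomes can pass one test and fail the other.

```python
# Toy numbers, invented for illustration: each tuple is
# (group, qualified, hired) -- "qualified" is ground truth,
# "hired" is the model's decision.
applicants = [
    ("A", True, True), ("A", True, True), ("A", False, False), ("A", False, False),
    ("B", True, True), ("B", True, True), ("B", True, False),  ("B", True, False),
]

def hire_rate(group: str, only_qualified: bool = False) -> float:
    """Share of (optionally: qualified) applicants in `group` who were hired."""
    pool = [hired for g, qualified, hired in applicants
            if g == group and (qualified or not only_qualified)]
    return sum(pool) / len(pool)

for g in ("A", "B"):
    print(f"group {g}: overall hire rate = {hire_rate(g):.2f}, "
          f"hire rate among qualified = {hire_rate(g, only_qualified=True):.2f}")
```

On these numbers, both groups are hired at the same overall rate (demographic parity holds), yet qualified applicants in group B are hired half as often as in group A (equal opportunity fails). Whichever metric an organization embeds in its AI becomes its operative definition of "fair."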

"Let's Make Algorithms Work for Everyone. Human-in-the-Loop is a Must." DataIntell Team

Paper: Read More | Learn More: ACLU Case
