
ARTIFICIAL INTELLIGENCE
🌎 What If "Fair" AI Isn't About Being Fair But About Who Gets to Define What Fair Means?

AI systems don't decide what is fair on their own; people give them that power.
Read that sentence again.
Algorithms gain influence from the people and departments that choose to implement them, authorize their use, and position them as "objective" or "fair."
I have been reading through a new research paper on AI and fairness in hiring. The story inside it reveals the conflicts that play out within organizational recruitment processes.
Here's what happened: Imagine working at the same company for five years. Tax season after tax season, you exceed every performance target. Your manager watches you handle complex customer cases with precision. When the promotion opens up, he's certain you're the perfect fit.
"She's exactly what we need," he tells HR. "Five years of excellent performance. She speaks three languages and has the judgment for complex situations."
Two weeks later, the AI interviewer rejects you. No explanation. No human feedback. Just an algorithmic decision.
Not because you lack qualifications, but because the machine couldn't understand your sign language interpreter. What HR called "communication concerns" was actually the algorithm failing to process how a Deaf Indigenous woman communicates.
Sarah Chen, a five-year Intuit employee with stellar reviews, was blocked from promotion by HireVue's AI system. A machine decided her career wasn't worth a second look.
This research by Elmira van den Broek, Anastasia V. Sergeeva, and Marleen Huysman, of the Stockholm School of Economics and the Vrije Universiteit Amsterdam, explains exactly what went wrong in Sarah's case.
Key Findings
🔄 Fairness Gets Locked In: When AI enters hiring, it doesn't eliminate bias. It locks in one specific definition of "fairness" and won't recognize any other version later.
⚖️ Three Competing Definitions: Organizations battle over algorithmic consistency (treating everyone the same), managerial judgment (recognizing context), and business outcomes (hiring people who perform well). The sketch after this list makes the conflict concrete.
🏢 Power Determines "Fair": HR champions algorithmic "consistency," managers fight for contextual judgment, and business leaders want results. Whichever group wins the political battle gets its fairness definition encoded.
🔒 Change Becomes Impossible: Once deployed, the algorithm's fairness assumptions become organizational policy, even when they produce obviously unfair outcomes.
🎭 Objectivity Theater: Companies use AI to appear "bias-free," but bias just moves from individual decisions to systemic design choices.
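To see why the three definitions collide, here is a minimal, hypothetical Python sketch. Every field, weight, and threshold below is invented for illustration; the paper describes these tensions organizationally, not in code. The point is that three reasonable notions of "fair," applied to the same candidate, can give three different answers.

```python
# Hypothetical illustration: three "fairness" definitions scoring the
# same candidate. All names, fields, weights, and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Candidate:
    years_tenure: int
    performance_rating: float   # 0-5, from annual reviews
    languages: int
    uses_interpreter: bool      # communicates through a sign language interpreter
    video_speech_score: float   # 0-1, what a video AI extracts from the audio track

def consistency_fair(c: Candidate) -> bool:
    """'Treat everyone the same': one fixed rubric, applied uniformly.
    The rubric assumes spoken audio, so an interpreter-mediated interview
    yields a low speech score regardless of actual ability."""
    return c.video_speech_score >= 0.6

def judgment_fair(c: Candidate) -> bool:
    """'Recognize context': a manager discounts the speech score when it
    reflects communication mode rather than communication ability."""
    if c.uses_interpreter:
        return c.performance_rating >= 4.0  # judge on demonstrated work
    return c.video_speech_score >= 0.6 and c.performance_rating >= 3.0

def outcomes_fair(c: Candidate) -> bool:
    """'Hire people who perform': predict future performance from the
    track record, ignoring the interview signal entirely."""
    return c.performance_rating >= 4.0 and c.years_tenure >= 2

sarah_like = Candidate(years_tenure=5, performance_rating=4.8,
                       languages=3, uses_interpreter=True,
                       video_speech_score=0.1)

for rule in (consistency_fair, judgment_fair, outcomes_fair):
    print(f"{rule.__name__}: {'promote' if rule(sarah_like) else 'reject'}")
# consistency_fair: reject   <- the definition the deployed AI locked in
# judgment_fair:    promote
# outcomes_fair:    promote
```

Whichever of these functions ships in production becomes, in effect, the organization's policy on fairness. That is the lock-in described above.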
Why It Matters
For HR Leaders: Your choice of AI hiring system isn't a technical decision; it's a values decision about which definition of fairness your organization will enforce from then on.
For Hiring Managers: You're not dealing with "objective" systems. You're experiencing enforcement of someone else's fairness definition that ignores your contextual judgment.
For Job Seekers: "Communication concerns" flagged by AI often reflect the system's inability to understand diverse communication styles, not actual job deficiencies.
For Everyone: This isn't just hiring. Every "fair" AI system, whether in loans, sentencing, or healthcare, faces the same challenge. We need democratic processes to decide whose definition of fairness wins.
"Let's Make Algorithms Work for Everyone. Human-in-the-Loop is a Must." — DataIntell Team
Paper: Read More