ARTIFICIAL INTELLIGENCE
🌎 How AI Rejected Derek's 100 Job Applications — Without a Single Human Ever Reading Them

Source: ChatGPT

The story of Derek Mobley really got to me.

Imagine spending nearly a decade building your career in finance, IT, and customer service, doing everything right. Then you apply for 100+ jobs. Over seven years.

Every single one rejected.

Not by a hiring manager. Not by a recruiter. By an algorithm. Silently. Often within minutes. Sometimes in the middle of the night.

Derek Mobley, a Black man in his 40s living with anxiety and depression, applied through Workday's AI screening platform for seven years straight. One rejection arrived at 1:50 a.m., less than an hour after he submitted at 12:55 a.m. No human was awake. No human reviewed anything. A machine decided his career wasn't worth a second look.

That's not a glitch. That's the system working exactly as designed.

He wasn't alone. Arshon Harper, a Black IT professional with a decade of experience, applied to 150 positions at Sirius XM. Rejected from 149. The AI, having learned from decades of biased hiring patterns, used his zip code and school as proxies for race.
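How does a "race-blind" screener still discriminate? Here is a minimal sketch using synthetic data and scikit-learn. Everything in it, the numbers, the feature names, the setup, is illustrative and has nothing to do with Workday's or Sirius XM's actual systems; it only shows the general mechanism by which a model trained on biased historical decisions reconstructs a protected attribute from correlated proxies like ZIP code.

```python
# Illustrative sketch (synthetic data): even when race is excluded from the
# features, a model trained on biased historical hiring labels can recover
# it through a correlated proxy such as ZIP-code cluster.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical setup: group membership (never shown to the model) correlates
# strongly with ZIP-code cluster, e.g. due to residential segregation.
group = rng.integers(0, 2, n)
zip_cluster = np.where(rng.random(n) < 0.8, group, 1 - group)
skill = rng.normal(0, 1, n)  # true qualification, independent of group

# Biased historical labels: past hiring favored group 0 regardless of skill.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, n)) > 0.8

# Train only on "race-blind" features: skill and ZIP cluster.
X = np.column_stack([skill, zip_cluster])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates (skill = 0) get different scores
# purely because of where they live.
for z in (0, 1):
    p = model.predict_proba([[0.0, z]])[0, 1]
    print(f"ZIP cluster {z}: predicted hire probability {p:.2f}")
```

Dropping the protected attribute does not remove the bias; the model simply routes it through whatever correlated features remain.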

We debate AI bias in chatbots. In image generators. In language models. But rarely in the place where it causes the most direct, life-altering harm: the job application.

Right now, millions of candidates are being screened and rejected not by a human who can be held accountable, but by an algorithm trained on decades of historical data that reflects every bias that ever existed in the workplace.

So here's what frustrated me: where are the laws that stop this?

But I have good news: researchers M.M. Abdullah Al Mamun Sony, Mohammad Bin Amin, Aysha Ashraf, K.M. Anwarul Islam, Nitai Chandra Debnath, and Gouranga Chandra Debnath, from the University of Debrecen, BRAC University, the Asian University of Bangladesh, the State University of Bangladesh, and United International University, just published a landmark legal analysis mapping exactly where AI recruitment bias hides and where the laws have dangerous holes.

⚖️ Laws Are Decades Behind: Anti-discrimination laws were written long before algorithms existed. They prohibit bias in principle but say nothing about an AI trained on biased data making the call.

🧠 Non-Binary Individuals Are Invisible: Most AI hiring systems assume binary gender. Non-binary candidates don't fit the model and are often misranked or auto-rejected. Almost no legal framework explicitly protects them.

🔍 Black Boxes Block Justice: Companies hide algorithms behind trade-secret claims. Enforcement agencies lack the technical expertise to investigate. Derek had no idea a machine was rejecting him; that's by design.

📊 No Country Has It Right: Comparing the EU, the US, and Finland, the paper finds that no jurisdiction fully covers scope, enforcement, AI-specific provisions, and protections for all marginalized groups. The EU AI Act (2025) comes closest, but critical gaps remain.

Why It Matters

For HR Professionals: "We didn't know the algorithm was biased" is no longer a defense. Bias audits and human oversight are becoming legal requirements, not just best practices (a minimal audit sketch follows this list).

For Policymakers: The paper is a roadmap. Most urgent: mandatory AI bias audits, enforcement bodies with real technical expertise, explicit non-binary protections in employment law.

For Job Seekers: The first filter in hiring is now algorithmic — trained on yesterday's discrimination. It's not you. It's the system.
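So what would one of those mandatory bias audits actually measure? One long-standing screening heuristic is the US EEOC's four-fifths rule: if one group's selection rate falls below 80% of the most-favored group's, the process is flagged for adverse impact. A minimal sketch follows; the applicant counts and the `adverse_impact_ratio` helper are hypothetical, for illustration only.

```python
# Minimal sketch of one common bias-audit metric: the adverse impact ratio
# behind the EEOC's "four-fifths rule". All numbers below are made up.
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of the lower group's selection rate to the higher group's."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical audit: 500 applicants per group; the screener advances
# 120 candidates from group A but only 60 from group B.
ratio = adverse_impact_ratio(120, 500, 60, 500)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.50

if ratio < 0.8:
    print("Below the four-fifths threshold: flag for review.")
```

A ratio like this is a floor, not a clean bill of health: passing the four-fifths check does not prove a screener is fair, only that it cleared one crude test.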

"Let's Make Algorithms Work for Everyone. Human-in-the-Loop is a Must." — DataIntell Team

Paper: Read More | News: CCN
