The Downside of Relying on AI in Recruiting

Aug 4, 2025

You're an engineering manager at an environmental consulting firm, working with a recruiting agency to fill several open positions. Despite receiving hundreds of resumes each week, finding qualified candidates feels impossible. Your inbox is flooded with what appear to be promising applications, but when you reach out, many candidates simply don't exist. Others have resumes that are just your job description rewritten as incomprehensible word salad.

Welcome to the dystopian reality of AI-driven recruiting in 2025.

"The amount of AI-generated garbage on both sides makes it nearly impossible to sift through and get to a real person," laments one frustrated recruiter on Reddit. "Pretty much every job posting we put up gets inundated by tens, if not hundreds of AI-generated garbage resumes & fake candidates."

While AI promises unprecedented efficiency in talent acquisition—with projections suggesting its use will double by 2030—an over-reliance on these tools introduces significant risks. From perpetuating bias to dehumanizing the hiring process, the unchecked use of AI is creating as many problems as it solves.


The AI Arms Race Has Ruined Hiring

Today's recruitment landscape has become a frustrating battle between AI-powered application tools and AI-powered screening tools. On one side, job seekers use AI to scrape LinkedIn postings and generate "perfect" resumes. On the other, companies deploy applicant tracking systems (ATS) to screen the flood of applications.

The result? A market where:

  • Junior-level engineers can't get past AI screeners without padding their resumes

  • Senior/team lead positions receive applications from candidates who don't actually exist

  • Internal transfers are overlooked in favor of AI-selected external candidates

  • Genuinely qualified candidates get lost in the shuffle of algorithm-pleasing applications

One recruiter shared: "AI-generated resumes were plugging up their inboxes with fake candidates, who on paper were amazing - but really didn't exist." Another noted receiving a resume that was "literally just the job description and role written back word for word."
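
As a small illustration of how a team might flag these echoed-back submissions, the sketch below scores how much of a resume's phrasing is lifted directly from the job posting. The tokenizer, n-gram size, and threshold are assumptions for illustration, not a vetted heuristic.

```python
import re

def ngrams(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Lowercase the text, strip punctuation, and return its word n-grams."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(resume: str, job_posting: str, n: int = 3) -> float:
    """Fraction of the resume's n-grams that also appear in the posting.

    A score near 1.0 suggests the resume is the posting echoed back.
    """
    resume_grams = ngrams(resume, n)
    if not resume_grams:
        return 0.0
    return len(resume_grams & ngrams(job_posting, n)) / len(resume_grams)

# Hypothetical cutoff: flag resumes that are mostly copied phrasing.
COPY_PASTE_THRESHOLD = 0.6

def looks_copied(resume: str, job_posting: str) -> bool:
    return overlap_score(resume, job_posting) >= COPY_PASTE_THRESHOLD
```

Legitimate resumes will share some boilerplate with the posting (certifications, software names), so a flag like this belongs in front of a human reviewer, not wired to an auto-reject rule.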

Algorithmic Bias: The Illusion of Objectivity

A fundamental problem with AI recruiting tools is the mistaken belief that algorithms are objective. In reality, they learn from historical hiring data—data that often reflects existing biases.

Amazon's infamous AI recruiting experiment provides a sobering example. The company had to scrap its AI tool after discovering it penalized resumes containing the word "women's" and downgraded graduates from all-women's colleges. Why? Because the system was trained on a decade of predominantly male hires.

The video interview analysis tool HireVue faced similar controversy for favoring certain facial expressions and speech patterns, potentially disadvantaging neurodivergent candidates or those from different cultural backgrounds.

These biases manifest in several ways:

  1. Training Data Bias: Systems learn from datasets reflecting historical hiring patterns that favor specific demographics

  2. Flawed Data Sampling: Non-representative datasets lead to algorithms favoring over-represented groups

  3. Feature Selection Bias: Systems may prioritize features correlated with privilege, such as attendance at prestigious universities
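
To see how the first of these failure modes arises, consider a toy screener trained on synthetic "historical" hires in which one group was quietly favored. Everything here, the data, the features, the favoritism coefficient, is invented purely to demonstrate the mechanism.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants: skill is the job-relevant signal;
# group membership (0 or 1) should be irrelevant to hiring.
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Biased history: past decisions rewarded skill but also quietly
# favored group 1, so the labels themselves encode the prejudice.
hired = (skill + 0.8 * group + rng.normal(scale=0.5, size=n)) > 1.0

# Train a "neutral" screener on those biased outcomes.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

print("weight on skill:", round(model.coef_[0][0], 2))
print("weight on group:", round(model.coef_[0][1], 2))
# The group weight comes out strongly positive: the model has
# learned the historical favoritism along with the real signal.
```

Dropping the group column doesn't cure this in real systems, because proxies such as zip code, college name, or word choice reconstruct it, which is exactly the feature-selection bias listed above.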

For environmental consulting firms and other specialized industries, these biases can be particularly problematic. The hiring criteria for specialized roles often require nuanced evaluation that algorithms struggle to capture.

The Dehumanization of Hiring

Perhaps the most concerning aspect of AI-driven recruiting is how it strips away the human element that makes successful hiring possible. Over-automation risks turning recruitment into a transactional, robotic process that overlooks critical human elements.

"How do you tell an ATS that I want a 'smart' person that can solve problems regardless of the field they are in?" asks one engineering manager. This highlights AI's fundamental limitation—it cannot grasp nuanced qualifications or assess crucial attributes like:

  • Communication style

  • Emotional intelligence

  • Cultural fit within the team and work environment

  • Problem-solving approach

  • Genuine passion for environmental consulting

The market is bad enough for candidates without the additional barrier of algorithmic gatekeeping. As one hiring professional noted, AI "struggles with assessing cultural fit," emphasizing the need for human assessment in the hiring process.

This dehumanization manifests in two critical failures:


False Negatives: Overlooking Qualified Candidates

AI systems frequently reject qualified candidates whose resumes don't contain the exact keywords they're programmed to find. A brilliant junior-level engineer might be passed over simply because they didn't include the precise terminology the algorithm seeks.

Many candidates report feeling that their resume has become a template to be stuffed with keywords rather than an accurate representation of their skills and experience.
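
A minimal sketch makes the false-negative mechanism concrete. The required keywords and resume snippets below are hypothetical, but the failure is the one candidates describe:

```python
# Naive ATS-style rule: every required keyword must appear verbatim.
REQUIRED_KEYWORDS = {"environmental compliance", "phase i esa", "remediation"}

def keyword_pass(resume_text: str) -> bool:
    text = resume_text.lower()
    return all(kw in text for kw in REQUIRED_KEYWORDS)

qualified = """Led Phase 1 environmental site assessments and soil
clean-up projects; ensured permits met state environmental regulations."""

keyword_stuffed = """Environmental compliance. Phase I ESA. Remediation.
Environmental compliance specialist with remediation experience."""

print(keyword_pass(qualified))        # False: same skills, different words
print(keyword_pass(keyword_stuffed))  # True: the parser is satisfied
```

The qualified candidate described the same work in plain language and fails; the stuffed resume sails through. Synonym lists and embeddings soften the problem, but the incentive to write for the parser remains.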

False Positives: Advancing Unqualified Candidates

Conversely, AI can advance candidates who are skilled at creating GPT-enhanced resumes but lack the actual qualifications for the role. This is particularly problematic for senior/team lead positions where leadership skills and experience are critical but difficult to quantify.

The Data Quality Disaster: Garbage In, Garbage Out

AI's effectiveness depends entirely on the quality of data it processes. In today's recruiting environment, that data is increasingly compromised.

The Flood of AI-Generated Applications

While companies like Google (receiving over 3 million applications) and Goldman Sachs (315,126 applications) implemented AI to handle high volumes, these same tools have enabled a tsunami of low-quality applications.

"Your company created word salad on your job postings, hoping to hire experienced people for entry-level pay, and received word salad back," one commenter astutely observed. This vicious cycle creates a recruiting environment where:

  • Bots scrape LinkedIn for job postings

  • AI generates generic applications

  • ATS platforms filter applications based on keywords

  • Human recruiters waste time interviewing candidates who aren't qualified or don't exist

For recruiting firms working with environmental consulting companies and other specialized industries, this creates a particularly challenging landscape. The technical expertise required for these roles cannot be properly assessed by algorithms scanning for buzzwords.

Data Privacy and Security Risks

AI recruitment systems handle vast amounts of sensitive personal data. Organizations must prioritize robust data security and comply with evolving regulations to protect candidate information and avoid legal repercussions.

This is especially concerning for candidates who may be quietly exploring an internal transfer or negotiating benefits that aren't publicly disclosed. Their data could be compromised or mishandled by AI systems lacking proper security protocols.

The Path Forward: A Balanced, Human-Centric Approach

The solution isn't abandoning technology but using it strategically as a tool to augment—not replace—human judgment. Here's how to reclaim the human element in recruiting:

1. Maintain Human Oversight

Balance Automation with Human Touch: Ensure AI tools complement, not replace, recruiters. Maintain personal communication and relationship-building with candidates.

Critically Review AI Decisions: Engineering managers should be the final arbiters of hiring decisions, not algorithms. Review AI recommendations with a critical eye, especially for technical or specialized roles.

One study found that over 90% of employers use automated systems for filtering applications, but the most successful ones maintain significant human oversight.
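
One way to encode that oversight structurally is to let the model triage but never reject. The sketch below, with a placeholder score and threshold, fast-tracks only high-confidence applications and routes everything else to a recruiter, so a model mistake costs review time rather than a lost hire:

```python
from dataclasses import dataclass
from enum import Enum

class Route(Enum):
    ADVANCE = "advance to interview"
    HUMAN_REVIEW = "queue for recruiter review"

@dataclass
class Application:
    candidate: str
    ai_score: float  # 0.0-1.0, from whatever screening model is in use

def triage(app: Application, advance_at: float = 0.8) -> Route:
    """The model may fast-track, but only a human may say no."""
    if app.ai_score >= advance_at:
        return Route.ADVANCE
    return Route.HUMAN_REVIEW

for app in [Application("A. Rivera", 0.91), Application("B. Chen", 0.42)]:
    print(app.candidate, "->", triage(app).value)
```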

2. Improve Your Inputs

Invest in Quality Job Descriptions: The garbage-in-garbage-out principle applies strongly here. Clear, specific, and human-friendly job descriptions attract better candidates and improve AI screening accuracy.

Diversify Training Data & Conduct Bias Audits: If using AI, demand transparency from vendors. Ensure datasets are inclusive and regularly audit systems to identify and mitigate bias.
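
A basic bias audit doesn't require the vendor's cooperation; you can compute selection rates from your own screening logs. The sketch below applies the EEOC's four-fifths rule of thumb, with invented group labels and counts:

```python
def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose selection rate falls below 80% of the
    best-performing group's rate (the EEOC 'four-fifths' rule)."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

# Hypothetical screening outcomes: (passed screen, total applicants).
outcomes = {"group_a": (120, 400), "group_b": (60, 350)}
rates = {g: passed / total for g, (passed, total) in outcomes.items()}

print(rates)                     # group_a: 0.30, group_b: ~0.17
print(four_fifths_check(rates))  # group_b falls below the 80% line
```

A failed check isn't proof of illegal discrimination, but it is exactly the kind of signal that should trigger the audits and vendor conversations described above.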

3. Rethink the Recruitment Funnel

Implement Human-Led Screening: Use better screening questions before the technical or AI-driven stages to weed out fake candidates and assess genuine interest.

Prioritize Networking: "The best way is just networking; work through people you know to get connected," advises one recruitment expert. For specialized roles in environmental consulting or engineering, industry connections often yield better candidates than blind applications.

Bring Back Personal Interaction: Consider reintroducing in-person or mandatory video-on interviews to establish genuine connection and eliminate fraud. As one recruiter noted, suspicious candidates often claim their "camera 'unfortunately doesn't' work."

Conclusion

AI in recruiting is a powerful tool, but its unchecked implementation has created a dystopian hiring practice for many. The future of effective talent acquisition lies not in more automation but in a smarter, more ethical partnership between technology and human intuition.

For engineering managers and environmental consultants seeking to build high-performing teams, this means being thoughtful about where and how AI fits into your recruitment strategy. The market is bad enough without letting algorithms make it worse.

By focusing on quality data, maintaining stringent human oversight, and fostering genuine connections, organizations can navigate the pitfalls of AI and build stronger, more diverse teams that truly meet their hiring criteria.

The most successful recruiters aren't those with the most sophisticated AI—they're those who remember that hiring is fundamentally about humans connecting with humans.

Frequently Asked Questions

What is the main problem with using AI in recruiting?

The main problem is that the overuse of AI has created a dysfunctional "arms race" between AI-powered application tools and AI-powered screening tools. This results in a flood of low-quality or fake applications, making it incredibly difficult for recruiters to find genuine, qualified candidates. It also leads to issues like algorithmic bias and the dehumanization of the hiring process.

How does AI introduce bias into the hiring process?

AI introduces bias primarily through the data it's trained on. If historical hiring data reflects a bias towards certain demographics (e.g., predominantly male hires), the AI will learn and perpetuate those biases, unfairly penalizing candidates from underrepresented groups. This can happen when the training data is flawed or when the AI prioritizes features correlated with privilege, such as attendance at specific universities.

Why are qualified candidates rejected by AI screeners?

Qualified candidates are often rejected because AI screeners rely heavily on exact keyword matching. A talented applicant might be overlooked—creating a "false negative"—simply because their resume doesn't use the specific terminology the algorithm is programmed to find. This forces candidates to focus on pleasing an algorithm rather than accurately representing their skills and experience.

How can companies combat the flood of AI-generated resumes?

Companies can combat fake resumes by adopting a more human-centric screening process. This includes writing clear and specific job descriptions to attract better applicants, implementing human-led screening questions before the AI-driven stages, and prioritizing networking and personal interaction. Reintroducing mandatory video-on interviews is another effective tactic to verify a candidate's identity and weed out fraud.

What is the best way to use AI in recruiting?

The best way to use AI in recruiting is as a tool to augment, not replace, human judgment. A balanced approach involves using AI to handle high-volume, repetitive tasks while ensuring humans have the final say in all hiring decisions. This means critically reviewing AI recommendations, maintaining personal communication with candidates, and focusing on building genuine relationships to assess qualities like cultural fit and problem-solving skills that algorithms cannot measure.

How does over-automation dehumanize hiring?

Over-automation dehumanizes hiring by turning it into a transactional, robotic process that ignores crucial human qualities. AI systems struggle to assess nuanced attributes like communication style, emotional intelligence, and cultural fit. This algorithmic gatekeeping prevents managers from connecting with candidates on a human level and understanding their true potential beyond keywords on a resume.
