Fake Candidates Using AI During Interviews - Nicholas Skeba Info


Hey folks — I want to unpack something increasingly wild and real: fake candidates powered by AI showing up in job interviews. This isn’t sci-fi anymore — it’s happening. I’m writing this as someone who’s seen both sides (jobseeker, hiring stakeholder), so expect a mix of empathy + practical thinking.


What do I mean by “fake candidates”?

By “fake candidate,” I mean a persona that doesn’t truly exist (or exists differently than presented), constructed or supported via AI, which successfully makes it into parts of the hiring funnel — sometimes even interviews. It can include:

  • AI-generated profiles + resumes + headshots
  • Deepfake or avatar video interviews
  • Real people behind the scenes controlling AI avatars
  • Candidates using live AI assistance (teleprompters, hidden chat tools) to feign competence

This is more than exaggeration or resume-padding — it’s deception at a meta level.


Why this is a growing problem

So yeah — this is a security risk, a trust risk, and a cost risk:

  • It’s easier than ever. A cybersecurity firm found that a fake video candidate can be created in as little as 70 minutes.
  • Volume abuse. Many fake or semi-fake profiles are created in bulk to spam applicant tracking systems or job boards.
  • Deepfakes in interviews. AI can simulate someone’s face, voice, expressions, or overlay them, making video interviews vulnerable.
  • Falling trust in virtual processes. Some firms are pushing for more in-person rounds because they worry virtual is too easy to game.
  • Scale forecasts. Gartner projects that by 2028, 1 in 4 candidate profiles globally could be fake.
  • Real damage cases. Some fake candidates’ motives aren’t just “get a job” — they aim to infiltrate systems, steal data, or even act as trojans.

What about the ethics & the future?

A few reflections and caveats, because this isn’t all black & white:

  • Not all AI use is “fraudulent”: Candidates might use AI to polish their resumes or frame answers more articulately. This is a gray area: is that unfair or just modern “sharpening your edge”?
  • Privacy & bias concerns: Identity verification and deepfake detection often involve biometric data. This raises privacy issues and potential bias (e.g. facial recognition systems misclassify some populations at higher rates).
  • Arms race: As detection improves, so will the sophistication of fakes. Today’s detection methods might be obsolete in a few years.
  • Changing expectations: In some tech roles, using AI tools is part of the job. Some companies might legitimize “open-book style” interviewing or allow AI assistance under monitored settings.
  • Fairness for earnest candidates: Overzealous verification or skeptical interviewers might penalize genuinely qualified candidates, especially those without resources to produce “perfect” media.