Public skepticism over artificial intelligence in hiring gained a fresh voice this week as Richard Stott said he found widespread agreement that automated job interviews are a mistake. His comments reflect a growing debate over how far employers should go in using algorithms to screen candidates, and whether these tools help or harm fair access to work.
Stott did not specify the venue or audience for his remarks, but his point was clear. Many people he spoke with opposed AI-led interviews. The sentiment highlights a broader shift among job seekers and some hiring managers, who worry about bias, privacy, and the loss of human judgment in key decisions.
Richard Stott said he had “unanimous support” from people agreeing AI interviews were “not a good idea.”
Public Pushback and Candidate Concerns
Resentment toward AI interviews has been building for years. Candidates often say one-way video prompts and automated scoring feel cold and confusing. Some fear that a glitch, camera angle, or background noise could affect their score. Others worry that they have no way to appeal a decision.
These concerns are not only about comfort. They are about fairness. AI screening tools are often trained on data from past hires, which can encode historical patterns of discrimination; left unaddressed, that can lock in unequal outcomes. Disability advocates also warn that automated tools may not support accommodations well, leaving some applicants at a disadvantage.
Stott’s claim of “unanimous support” against AI-led interviews echoes those views. It suggests a widening gap between what many applicants want and what some employers deploy in the name of speed.
What Supporters Say
Some employers defend AI interviews. They say the tools help teams process thousands of applications more quickly, score candidates on consistent criteria, and reduce scheduling hassles. In theory, structured prompts and standardized scoring could reduce bias from fatigued or distracted interviewers.
Advocates also note that companies can use AI for initial screening only, with people making final decisions. They argue that when built with clear rules and tested for bias, the technology can make hiring more fair, not less. Vendors point to features such as transcript review, role-based scoring rubrics, and audit logs that document decisions.
- Speed and scale for high-volume hiring
- Standardized questions and scoring
- Audit trails and bias testing, when actually implemented
Regulatory and Ethical Scrutiny
Regulators are moving to apply rules. In New York City, Local Law 144 requires employers using automated employment decision tools to conduct annual bias audits and notify candidates. The European Union's AI Act places employment-related AI in its high-risk category, which carries documentation and oversight duties. In the United Kingdom, the Information Commissioner's Office has warned organizations to test systems for discrimination and to provide explanations on request.
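The bias audits required in New York City center on a simple statistic: the selection rate for each demographic group, divided by the rate for the most-selected group. A minimal sketch of that impact-ratio math, using hypothetical applicant counts (the group names and numbers below are illustrative, not from any real audit):

```python
# Sketch of the impact-ratio calculation at the heart of a bias audit.
# Counts are hypothetical: (candidates selected, total applicants) per group.
groups = {
    "group_a": (120, 400),
    "group_b": (60, 300),
}

# Selection rate = selected / applicants, computed per group.
rates = {g: selected / total for g, (selected, total) in groups.items()}

# Impact ratio = each group's rate relative to the highest group's rate.
top_rate = max(rates.values())
impact_ratios = {g: rate / top_rate for g, rate in rates.items()}

# A ratio well below 1.0 flags potential adverse impact; the traditional
# "four-fifths rule" in US employment guidance treats 0.8 as a rough floor.
for group, ratio in impact_ratios.items():
    print(f"{group}: impact ratio {ratio:.2f}")
```

With these numbers, group_a's rate (0.30) sets the baseline, and group_b's ratio of roughly 0.67 would fall below the four-fifths threshold, the kind of result an audit is meant to surface before a tool is deployed.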
These rules are forcing changes. One major vendor dropped facial analysis from its interview product after experts questioned accuracy and fairness. Many employers now ask how to explain an AI decision, which data went into it, and who is accountable when it goes wrong.
Ethicists emphasize human review. They say employers should keep a person in the loop, publish clear criteria, and allow candidates to request alternatives. Transparency about how the tool works and what it measures is becoming a standard expectation.
Industry Impact and Next Steps
The market for hiring software remains large, but the tone has shifted. Companies that rushed to automate are now slowing down to audit systems and rethink design. Some are rolling back AI-led interviews in favor of human screens, or offering both options to candidates.
Stott’s report of unanimous pushback may signal a tipping point for perception, even if it does not reflect every workplace. Employers that keep AI interviews will face pressure to prove they are accurate, fair, and accessible. Vendors will need to show clear evidence that their models improve outcomes for all groups, not just efficiency metrics.
For job seekers, the message is mixed. AI will likely remain part of hiring, but in more limited roles, such as scheduling, skills matching, and structured note-taking for human interviewers. Where AI interviews persist, candidates may see clearer disclosures, bias testing results, and human reviews before final decisions.
Stott’s comments capture a public mood that is hard to ignore. The next phase will turn on trust. Employers that prioritize transparency, testing, and human judgment may avoid the backlash. Those that treat AI as a one-click solution risk losing talent before the first question is even asked.